
1 Using Automated Static Bug Detection Tools

1.1 Coverity

Accessing Coverity:

Pre-analyzed code is available at coverity.cs.purdue.edu:8443

Instructions:

  • To view and triage warnings from the analysis:
    1. Select the “Outstanding Issues” item in the navigation tree on the left hand side of the page.
    2. Choose “Tomcat6” from the dropdown menu on the top of the page.
    3. You can click on any warning to view the details of the warning and the source code that triggered the warning.
  • Eight of the warnings have ‘Major’ severity.
  • Triage and repair these warnings: CID 10102, 10112, 10147, 10381, 10676, 10695, 10700, 11361
Triage: classify each warning as False Positive, Intentional, or Bug, and write your classification and explanation directly in the report.
False Positive
The warning does not indicate a fault.
  • Explain why you believe the warning is a false positive.
Intentional
The warning points to code the developer intentionally wrote that is incorrect (contains a fault) or follows bad practice.
  • Explain how the code can be refactored, or why it should be left as is.
Bug
The warning points to a defect (a fault or bad practice) in the code.
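For concreteness, here is a hypothetical Python sketch (not taken from Tomcat; all names are illustrative) of how one warning in each category might be triaged:

```python
# Hypothetical snippets illustrating how a warning in each category
# might be triaged.

def lookup(table, key):
    # A tool may warn about a possible missing-key access here.
    # Triage: False Positive -- the membership check below guards the
    # access, so the warning does not indicate a fault.
    if key in table:
        return table[key]
    return None

def close_quietly(resource):
    # A tool may warn that the exception is silently swallowed.
    # Triage: Intentional -- the developer deliberately ignores errors
    # during cleanup; either refactor to log the error or justify
    # leaving it as is.
    try:
        resource.close()
    except Exception:
        pass

def average(values):
    # A tool may warn about division by zero on an empty input.
    # Triage: Bug -- nothing prevents values from being empty, so the
    # warning points to a real defect; the guard below is the repair.
    if not values:
        return 0.0
    return sum(values) / len(values)
```

The same reasoning applies to the Coverity warnings: decide whether a guard elsewhere makes the warning spurious, whether the developer did it on purpose, or whether it is a genuine defect to repair.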

1.2 FB Infer

Instructions

Run FBInfer on the MC machines. Get the source code from /homes/cs408/tomcat and follow the steps below; expect the analysis to take 10-30 minutes.
cp -r /homes/cs408/tomcat .
cd tomcat
export PATH=/opt/infer/bin/:$PATH
infer run -- ant
  • The generated report.txt contains 1,605 issues in total.
  • Triage and repair the 8 designated issues.
  • As in Part 1, classify each issue as False Positive, Intentional, or Bug.
  • Issues to repair: 3, 22, 34, 60, 66, 78, 187, 262.

1.3 Discussion of Coverity and FB Infer

Discuss the similarities and differences, as well as the pros and cons, of Coverity and FB Infer as you see appropriate. Use examples.

2 Measuring the coverage of pytorch/torchrec

2.1 Environment Setup

  • See Doc

2.2 Coverage Measurement

Measure the statement coverage achieved by the existing test cases of 'torchrec'.
Run the following commands to generate the coverage report:
cd torchrec-0.2.0
bash test_coverage.sh
  • Paste a screenshot of the generated ‘index.html’ and report the overall statement coverage achieved by the existing test cases.
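The script test_coverage.sh produces the HTML report; under the hood a Python coverage tool records which statements each test executes. As a minimal self-contained illustration of that idea, the sketch below uses only the standard-library trace module (an assumption for illustration; it is not necessarily what the script uses):

```python
import trace

def absolute(x):
    # Two return statements below; a single call executes only one.
    if x < 0:
        return -x
    return x

tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(absolute, 5)    # executes the 'return x' statement only
counts = tracer.results().counts          # {(filename, lineno): hits}
covered_lines = {line for (_, line), hits in counts.items() if hits > 0}
# Running absolute(-3) as well would cover the remaining statement,
# raising the statement coverage of this tiny function to 100%.
```

Statement coverage is then the ratio of executed statements to total statements, which is exactly the number reported in index.html.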

2.3 Increasing Coverage

2.3.1 Example 1

Statements inside `quantize_embeddings` are not covered by TorchRec's existing test cases.
Goal: add a test that executes this function and increases coverage.
  1. Create a new file at torchrec/inference/tests/quantization_tests.py
  2. Add a new test case within the test class.
  3. See the handout.
  4. Run test_coverage.sh again.
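The actual test must call torchrec's quantize_embeddings exactly as the handout shows; since that code lives in the handout, the sketch below uses a hypothetical stand-in quantize function just to illustrate the shape of such a test class:

```python
import unittest

def quantize(values, scale=0.5):
    # Hypothetical stand-in for the uncovered function; the real target
    # is torchrec's quantize_embeddings (see handout).
    return [round(v / scale) * scale for v in values]

class QuantizationTest(unittest.TestCase):
    def test_quantize_runs_uncovered_statements(self):
        # Simply executing the function is what earns the new coverage;
        # the assertion pins down the behavior we expect from it.
        self.assertEqual(quantize([0.4, 1.2]), [0.5, 1.0])

# Re-running test_coverage.sh after adding the file picks the test up.
```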

2.3.2 Example 2

In torchrec/modules/utils.py, the function get_module_output_dimension is not covered.
  • See the handout for the code and further instructions.
Task
  • Add a new test case to the class in torchrec/modules/tests/test_utils.py to test this function.
  • Describe:
    1. What the function does (this helps you determine the expected input and output of your test case).
    2. What your test case does.
    3. Run bash test_coverage.sh and report the new statement coverage of utils.py.
Tips:
  • See the handout.
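The real signature of get_module_output_dimension is in the handout; as an illustration of how such a test (and the required description) might look, here is a self-contained sketch built around a hypothetical stand-in:

```python
import unittest

def get_output_dimension(layer_dims):
    # Hypothetical stand-in: given a stack of (in_features, out_features)
    # pairs, the output dimension is the out_features of the last layer.
    # The real get_module_output_dimension's signature is in the handout.
    if not layer_dims:
        raise ValueError("empty module")
    return layer_dims[-1][1]

class UtilsTest(unittest.TestCase):
    # 1. What the function does: maps a module description to its output
    #    size, which fixes the expected input/output of the test case.
    # 2. What the test does: feeds a two-layer description and checks
    #    the returned dimension; a second test exercises the error path.
    def test_output_dimension(self):
        self.assertEqual(get_output_dimension([(8, 4), (4, 2)]), 2)

    def test_empty_module_raises(self):
        with self.assertRaises(ValueError):
            get_output_dimension([])
```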

2.4 Increase Coverage - Advanced

Use at least two techniques described in class to increase overall coverage.
  • Cover at least 83 more statements.
Examples:
  • You can start with the coverage report from 2.3 and select a method that is not 100% covered.
    • Use a coverage criterion (e.g., statement coverage) to increase its statement coverage.
  • Explicitly document the CFG, test cases, coverage increase (before and after), your design, and lessons learned.
  • You can use branch coverage to increase the branch coverage of one method.
  • You can use input space partitioning or logic coverage to increase the coverage of one method.
  • You can use tools to automatically generate test cases or detect bugs.
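As a sketch of two of these techniques on a toy method (all names are illustrative, not from torchrec): the partitions below come from input space partitioning, and picking one test per partition also drives every branch outcome of classify, i.e. full branch coverage of the method.

```python
def classify(score):
    # Three branch outcomes -> at least three tests for full branch coverage.
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 60:
        return "pass"
    return "fail"

# Input space partitioning: one representative input per partition.
partitions = {
    "out of range": -1,   # drives the range-check branch (raises)
    "passing":      75,   # drives the score >= 60 branch
    "failing":      30,   # drives the fall-through branch
}
```

For the report, record the coverage numbers before and after adding such tests, and sketch the CFG of the chosen method so the branch outcomes you targeted are explicit.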
Document:
  • Test cases
  • Coverage increase
    • How many new statements are covered?
    • Does overall coverage increase?
  • Your design
  • Lessons learned
  • Anything else relevant