harsh04044
New member
Hi everyone,
I’ve been working on adding a test suite for our installation scripts (bash) and integrating it with Codecov. The tests are in place and running in CI, and we’ve got an “install” flag showing up in Codecov. The problem we’re stuck on is how to get *real* coverage for those shell scripts.
Here’s the situation: our tests mock commands like `brew`, `docker`, and `fnm` so we don’t actually run them in CI. That’s great for safety and speed, but the coverage tools we’ve tried (kcov and bashcov) only record whether a line was executed. Because the mocks intercept those calls, the script lines that invoke them never run for real and aren’t counted as covered, even though the behavior is being tested. We end up with either no usable coverage or very low numbers, while the tests themselves are quite thorough.
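One pattern that can help here (a sketch, not our current setup): mock the external commands as stub *executables* on `PATH` rather than as shell functions. The install script then executes every one of its own lines for real, so kcov/bashcov can count them, while the stubs keep `brew`/`docker`/`fnm` from actually running. The script name and log path below are illustrative assumptions:

```shell
#!/usr/bin/env bash
# PATH-shim mocking: stub executables intercept external commands,
# so the script under test runs all of its own lines for real.
set -euo pipefail

mockdir="$(mktemp -d)"
trap 'rm -rf "$mockdir"' EXIT

# A stub "brew" that records its invocation instead of running Homebrew.
cat > "$mockdir/brew" <<'EOF'
#!/usr/bin/env bash
echo "brew $*" >> "${MOCK_LOG:?}"
exit 0
EOF
chmod +x "$mockdir/brew"

export MOCK_LOG="$mockdir/calls.log"
export PATH="$mockdir:$PATH"

# Stand-in for a line from the real install script; under kcov this
# line executes and is counted, but Homebrew is never touched.
brew install fnm
grep -q 'install fnm' "$MOCK_LOG" && echo "mock invoked"
```

Running the test entry point itself under kcov (instead of instrumenting mocked internals) then yields line coverage that reflects what the tests actually exercise.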
We’ve temporarily used a generated/synthetic coverage report so the install flag appears in Codecov, but that isn’t real line coverage. The issue description says we need coverage that includes “all executable code, shell scripts and TypeScript alike,” using only the Codecov GitHub Action. So we need a way to get shell script coverage into Codecov that actually reflects what the tests run.
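For the upload side, one possible shape of the CI step (a hedged sketch; paths, the test runner name, and the assumption that kcov is installed on the runner are all placeholders, not our actual config):

```yaml
- name: Run install-script tests under kcov
  run: kcov --include-pattern=scripts/ coverage/install ./test/install/run-tests.sh

- name: Upload install coverage
  uses: codecov/codecov-action@v4
  with:
    files: coverage/install/*/cobertura.xml
    flags: install
```

kcov writes a Cobertura-format XML report into its output directory, which the Codecov GitHub Action can pick up directly, keeping the “only the Codecov Action” constraint intact.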
Has anyone dealt with something similar—getting meaningful bash/shell coverage into Codecov when tests rely heavily on mocks? Or is there a pattern (e.g. how you structure tests or which tool you use) that’s worked for you? Any guidance or references would be really helpful.
Thanks in advance.
Issue: https://github.com/PalisadoesFoundation/talawa-api/issues/4949
PR: https://github.com/PalisadoesFoundation/talawa-api/pull/4972