[{"data":1,"prerenderedAt":1267},["ShallowReactive",2],{"page-\u002Fadvanced-pytest-architecture-configuration\u002Fpytest-configuration-best-practices\u002Fpytest-markers-for-conditional-test-execution\u002F":3},{"id":4,"title":5,"body":6,"description":77,"extension":1261,"meta":1262,"navigation":194,"path":1263,"seo":1264,"stem":1265,"__hash__":1266},"content\u002Fadvanced-pytest-architecture-configuration\u002Fpytest-configuration-best-practices\u002Fpytest-markers-for-conditional-test-execution\u002Findex.md","Pytest markers for conditional test execution",{"type":7,"value":8,"toc":1251},"minimark",[9,13,18,22,58,71,124,131,135,156,163,304,319,389,408,412,422,441,487,500,545,569,573,579,698,711,739,743,756,777,863,874,877,930,934,952,1014,1036,1042,1045,1105,1108,1112,1122,1125,1154,1157,1161,1182,1204,1222,1235,1247],[10,11,5],"h1",{"id":12},"pytest-markers-for-conditional-test-execution",[14,15,17],"h2",{"id":16},"core-mechanics-of-conditional-markers","Core Mechanics of Conditional Markers",[19,20,21],"p",{},"Conditional test execution in pytest is fundamentally governed by the collection phase, not the execution phase. When pytest discovers test items, it evaluates marker expressions immediately upon module import. This architectural decision ensures that the test scheduler can accurately partition workloads before any fixtures are instantiated or test functions are invoked. Understanding this lifecycle boundary is critical for avoiding race conditions, import-time failures, and silent test suppression.",[19,23,24,25,29,30,33,34,37,38,41,42,45,46,49,50,53,54,57],{},"The two primary conditional markers, ",[26,27,28],"code",{},"pytest.mark.skipif"," and ",[26,31,32],{},"pytest.mark.xfail",", operate with distinct reporting semantics and lifecycle impacts. ",[26,35,36],{},"skipif"," removes the test from the execution queue entirely when its boolean condition evaluates to ",[26,39,40],{},"True",", reporting the item as ",[26,43,44],{},"SKIPPED",". 
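As a minimal illustration of these reporting semantics (the module name, test names, and the deliberately failing assertion below are invented for the sketch):

```python
# test_skip_vs_xfail.py -- illustrative sketch of the two reporting modes
import sys
import pytest

# skipif: when the condition is True, the test is removed from the
# execution queue and reported as SKIPPED without ever running.
@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_runs_on_modern_python():
    assert sys.version_info >= (3, 8)

# xfail: the test still runs; the failing assertion below is reported
# as XFAIL rather than FAILED (strict=False tolerates a surprise pass).
@pytest.mark.xfail(reason="Known binary floating-point rounding quirk", strict=False)
def test_known_rounding_failure():
    assert round(2.675, 2) == 2.68  # binary floats actually round this to 2.67
```

Running such a module with `pytest -rsx` surfaces the skip and xfail reasons in the short test summary instead of failing the suite.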
Conversely, ",[26,47,48],{},"xfail"," retains the test in the execution queue but annotates it as an expected failure. If the test unexpectedly passes, pytest's reporting behavior depends on the ",[26,51,52],{},"strict"," parameter. Both markers parse their condition strings using Python's ",[26,55,56],{},"eval()"," equivalent within a restricted namespace during collection. This means any variables, module attributes, or environment checks referenced in the condition must be resolvable at import time.",[19,59,60,61,64,65,70],{},"A frequent production issue stems from unregistered markers. When pytest encounters an undeclared marker, it emits a ",[26,62,63],{},"PytestUnknownMarkWarning",". While this does not halt execution, it degrades CI signal clarity and disables IDE autocompletion. To enforce strict marker hygiene, declare all custom and conditional markers in your configuration file. For comprehensive guidance on structuring these declarations, consult ",[66,67,69],"a",{"href":68},"\u002Fadvanced-pytest-architecture-configuration\u002Fpytest-configuration-best-practices\u002F","Pytest Configuration Best Practices"," before scaling your test matrix.",[72,73,78],"pre",{"className":74,"code":75,"language":76,"meta":77,"style":77},"language-toml shiki shiki-themes github-light github-dark","# pyproject.toml\n[tool.pytest.ini_options]\nmarkers = [\n \"skip_platform: Skip tests based on OS constraints\",\n \"requires_db: Skip tests when database is unavailable\",\n \"xfail_flaky: Mark known intermittent failures\"\n]\n","toml","",[26,79,80,88,94,100,106,112,118],{"__ignoreMap":77},[81,82,85],"span",{"class":83,"line":84},"line",1,[81,86,87],{},"# pyproject.toml\n",[81,89,91],{"class":83,"line":90},2,[81,92,93],{},"[tool.pytest.ini_options]\n",[81,95,97],{"class":83,"line":96},3,[81,98,99],{},"markers = [\n",[81,101,103],{"class":83,"line":102},4,[81,104,105],{}," \"skip_platform: Skip tests based on OS 
constraints\",\n",[81,107,109],{"class":83,"line":108},5,[81,110,111],{}," \"requires_db: Skip tests when database is unavailable\",\n",[81,113,115],{"class":83,"line":114},6,[81,116,117],{}," \"xfail_flaky: Mark known intermittent failures\"\n",[81,119,121],{"class":83,"line":120},7,[81,122,123],{},"]\n",[19,125,126,127,130],{},"To verify marker attachment before the execution phase begins, leverage ",[26,128,129],{},"pytest --collect-only -v",". This command outputs the complete test tree alongside attached markers without invoking any test logic. If a marker fails to appear in the collection output, the condition likely raised an exception during import, or the marker was applied to a non-test item (e.g., a helper function). Always validate collection output after modifying conditional logic to ensure the marker graph matches your architectural intent.",[14,132,134],{"id":133},"dynamic-condition-evaluation-at-runtime","Dynamic Condition Evaluation at Runtime",[19,136,137,138,140,141,143,144,147,148,151,152,155],{},"Dynamic conditional execution requires careful handling of runtime state during the collection phase. Because marker conditions are evaluated at import time, referencing mutable global state, performing network calls, or querying databases directly inside ",[26,139,36],{}," or ",[26,142,48],{}," expressions will degrade collection performance and introduce non-deterministic behavior. The recommended approach is to leverage deterministic, import-safe attributes such as ",[26,145,146],{},"sys.platform",", ",[26,149,150],{},"os.environ",", or the ",[26,153,154],{},"platform"," module.",[19,157,158,159,162],{},"When constructing boolean expressions, avoid lazy evaluation pitfalls. Python's short-circuiting behavior applies, but pytest's internal parser expects a resolvable boolean. If a condition depends on an external service, defer the check to the ",[26,160,161],{},"pytest_collection_modifyitems"," hook rather than embedding it in the marker. 
This preserves import-time safety while enabling runtime-aware filtering.",[72,164,168],{"className":165,"code":166,"language":167,"meta":77,"style":77},"language-python shiki shiki-themes github-light github-dark","# test_platform_skip.py\nimport sys\nimport platform\nimport pytest\n\n# Safe import-time evaluation using platform constants\nWINDOWS_ONLY = pytest.mark.skipif(\n sys.platform != \"win32\",\n reason=\"Requires Windows-specific registry APIs\"\n)\n\nLINUX_AND_PYTHON310_PLUS = pytest.mark.skipif(\n sys.platform != \"linux\" or sys.version_info \u003C (3, 10),\n reason=\"Test requires Linux kernel features and Python 3.10+ pattern matching\"\n)\n\n@WINDOWS_ONLY\ndef test_windows_registry_access():\n assert platform.system() == \"Windows\"\n\n@LINUX_AND_PYTHON310_PLUS\ndef test_linux_epoll_integration():\n # Implementation details omitted\n pass\n","python",[26,169,170,175,180,185,190,196,201,206,212,218,224,229,235,241,247,252,257,263,269,275,280,286,292,298],{"__ignoreMap":77},[81,171,172],{"class":83,"line":84},[81,173,174],{},"# test_platform_skip.py\n",[81,176,177],{"class":83,"line":90},[81,178,179],{},"import sys\n",[81,181,182],{"class":83,"line":96},[81,183,184],{},"import platform\n",[81,186,187],{"class":83,"line":102},[81,188,189],{},"import pytest\n",[81,191,192],{"class":83,"line":108},[81,193,195],{"emptyLinePlaceholder":194},true,"\n",[81,197,198],{"class":83,"line":114},[81,199,200],{},"# Safe import-time evaluation using platform constants\n",[81,202,203],{"class":83,"line":120},[81,204,205],{},"WINDOWS_ONLY = pytest.mark.skipif(\n",[81,207,209],{"class":83,"line":208},8,[81,210,211],{}," sys.platform != \"win32\",\n",[81,213,215],{"class":83,"line":214},9,[81,216,217],{}," reason=\"Requires Windows-specific registry 
APIs\"\n",[81,219,221],{"class":83,"line":220},10,[81,222,223],{},")\n",[81,225,227],{"class":83,"line":226},11,[81,228,195],{"emptyLinePlaceholder":194},[81,230,232],{"class":83,"line":231},12,[81,233,234],{},"LINUX_AND_PYTHON310_PLUS = pytest.mark.skipif(\n",[81,236,238],{"class":83,"line":237},13,[81,239,240],{}," sys.platform != \"linux\" or sys.version_info \u003C (3, 10),\n",[81,242,244],{"class":83,"line":243},14,[81,245,246],{}," reason=\"Test requires Linux kernel features and Python 3.10+ pattern matching\"\n",[81,248,250],{"class":83,"line":249},15,[81,251,223],{},[81,253,255],{"class":83,"line":254},16,[81,256,195],{"emptyLinePlaceholder":194},[81,258,260],{"class":83,"line":259},17,[81,261,262],{},"@WINDOWS_ONLY\n",[81,264,266],{"class":83,"line":265},18,[81,267,268],{},"def test_windows_registry_access():\n",[81,270,272],{"class":83,"line":271},19,[81,273,274],{}," assert platform.system() == \"Windows\"\n",[81,276,278],{"class":83,"line":277},20,[81,279,195],{"emptyLinePlaceholder":194},[81,281,283],{"class":83,"line":282},21,[81,284,285],{},"@LINUX_AND_PYTHON310_PLUS\n",[81,287,289],{"class":83,"line":288},22,[81,290,291],{},"def test_linux_epoll_integration():\n",[81,293,295],{"class":83,"line":294},23,[81,296,297],{}," # Implementation details omitted\n",[81,299,301],{"class":83,"line":300},24,[81,302,303],{}," pass\n",[19,305,306,307,310,311,314,315,318],{},"Environment variable integration requires defensive programming. Direct dictionary access (",[26,308,309],{},"os.environ[\"CI\"]",") will raise ",[26,312,313],{},"KeyError"," during collection if the variable is absent, halting the entire test suite. 
Use ",[26,316,317],{},".get()"," with explicit type coercion to ensure graceful degradation.",[72,320,322],{"className":165,"code":321,"language":167,"meta":77,"style":77},"# test_env_driven.py\nimport os\nimport pytest\n\n# Graceful handling of missing environment variables\nSKIP_IF_NO_DB_URL = pytest.mark.skipif(\n not os.environ.get(\"TEST_DB_URL\"),\n reason=\"TEST_DB_URL not configured; skipping integration tests\"\n)\n\nRUN_PERFORMANCE_TESTS = pytest.mark.skipif(\n os.environ.get(\"CI_PROFILE\", \"false\").lower() != \"true\",\n reason=\"Performance profiling disabled in current environment\"\n)\n",[26,323,324,329,334,338,342,347,352,357,362,366,370,375,380,385],{"__ignoreMap":77},[81,325,326],{"class":83,"line":84},[81,327,328],{},"# test_env_driven.py\n",[81,330,331],{"class":83,"line":90},[81,332,333],{},"import os\n",[81,335,336],{"class":83,"line":96},[81,337,189],{},[81,339,340],{"class":83,"line":102},[81,341,195],{"emptyLinePlaceholder":194},[81,343,344],{"class":83,"line":108},[81,345,346],{},"# Graceful handling of missing environment variables\n",[81,348,349],{"class":83,"line":114},[81,350,351],{},"SKIP_IF_NO_DB_URL = pytest.mark.skipif(\n",[81,353,354],{"class":83,"line":120},[81,355,356],{}," not os.environ.get(\"TEST_DB_URL\"),\n",[81,358,359],{"class":83,"line":208},[81,360,361],{}," reason=\"TEST_DB_URL not configured; skipping integration tests\"\n",[81,363,364],{"class":83,"line":214},[81,365,223],{},[81,367,368],{"class":83,"line":220},[81,369,195],{"emptyLinePlaceholder":194},[81,371,372],{"class":83,"line":226},[81,373,374],{},"RUN_PERFORMANCE_TESTS = pytest.mark.skipif(\n",[81,376,377],{"class":83,"line":231},[81,378,379],{}," os.environ.get(\"CI_PROFILE\", \"false\").lower() != \"true\",\n",[81,381,382],{"class":83,"line":237},[81,383,384],{}," reason=\"Performance profiling disabled in current environment\"\n",[81,386,387],{"class":83,"line":243},[81,388,223],{},[19,390,391,392,147,395,398,399,402,403,407],{},"Profiling collection 
overhead is essential when scaling beyond a few hundred tests. Marker conditions that invoke expensive functions (e.g., ",[26,393,394],{},"subprocess.run",[26,396,397],{},"socket.gethostbyname",", or ORM connection probes) will execute once per test module during collection. To measure this impact, run ",[26,400,401],{},"pytest --collect-only -q"," under a shell timer and compare the wall-clock delta against a baseline. If collection exceeds 3–5 seconds, replace dynamic checks with memoized boolean flags or precomputed environment snapshots. For deeper insights into how pytest parses and caches marker expressions during the collection phase, refer to ",[66,404,406],{"href":405},"\u002Fadvanced-pytest-architecture-configuration\u002F","Advanced Pytest Architecture & Configuration",".",[14,409,411],{"id":410},"advanced-composition-and-conftest-inheritance","Advanced Composition and Conftest Inheritance",[19,413,414,415,418,419,421],{},"Marker precedence follows a strict hierarchical resolution order: function-level markers override class-level markers, which override module-level markers, which override ",[26,416,417],{},"conftest.py"," markers. This inheritance model enables centralized conditional logic while allowing localized overrides. However, silent marker collisions frequently occur when nested ",[26,420,417],{}," files redefine markers without explicit combination.",[19,423,424,425,428,429,431,432,434,435,428,438,440],{},"When combining multiple conditions, use logical operators directly within the marker expression or stack multiple markers. 
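A sketch of the stacked form (test name, conditions, and reasons are invented); pytest skips the test as soon as any one of the stacked skipif conditions is True:

```python
# Stacked skipif markers on a single test (illustrative)
import os
import sys
import pytest

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX path semantics only")
@pytest.mark.skipif(
    os.environ.get("SKIP_SLOW", "false").lower() == "true",
    reason="Slow tests disabled for this run",
)
def test_posix_path_separator():
    # Executes only when neither condition matched
    assert os.sep == "/"
```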
Pytest evaluates stacked markers using logical ",[26,426,427],{},"OR"," semantics for ",[26,430,36],{}," (any condition evaluating to ",[26,433,40],{}," skips the test) and ",[26,436,437],{},"OR",[26,439,48],{}," (any condition matching triggers the xfail annotation)",[72,442,444],{"className":165,"code":443,"language":167,"meta":77,"style":77},"# conftest.py (root)\nimport pytest\nimport sys\n\n# Global skip condition applied to entire test tree\npytestmark = pytest.mark.skipif(\n sys.version_info \u003C (3, 9),\n reason=\"Suite requires Python 3.9+ type hinting syntax\"\n)\n",[26,445,446,451,455,459,463,468,473,478,483],{"__ignoreMap":77},[81,447,448],{"class":83,"line":84},[81,449,450],{},"# conftest.py (root)\n",[81,452,453],{"class":83,"line":90},[81,454,189],{},[81,456,457],{"class":83,"line":96},[81,458,179],{},[81,460,461],{"class":83,"line":102},[81,462,195],{"emptyLinePlaceholder":194},[81,464,465],{"class":83,"line":108},[81,466,467],{},"# Global skip condition applied to entire test tree\n",[81,469,470],{"class":83,"line":114},[81,471,472],{},"pytestmark = pytest.mark.skipif(\n",[81,474,475],{"class":83,"line":120},[81,476,477],{}," sys.version_info \u003C (3, 9),\n",[81,479,480],{"class":83,"line":208},[81,481,482],{}," reason=\"Suite requires Python 3.9+ type hinting syntax\"\n",[81,484,485],{"class":83,"line":214},[81,486,223],{},[19,488,489,490,492,493,496,497,499],{},"In a subdirectory, a child ",[26,491,417],{}," can refine or override this behavior. However, markers do not automatically merge across ",[26,494,495],{},"conftest"," boundaries. 
To preserve parent conditions while adding new constraints, explicitly reapply the parent marker or use ",[26,498,161],{}," for programmatic combination.",[72,501,503],{"className":165,"code":502,"language":167,"meta":77,"style":77},"# tests\u002Fintegration\u002Fconftest.py\nimport pytest\nimport os\n\n# Child conftest adds database requirement without overriding parent\npytestmark = pytest.mark.skipif(\n not os.environ.get(\"INTEGRATION_DB_HOST\"),\n reason=\"Integration database host not specified\"\n)\n",[26,504,505,510,514,518,522,527,531,536,541],{"__ignoreMap":77},[81,506,507],{"class":83,"line":84},[81,508,509],{},"# tests\u002Fintegration\u002Fconftest.py\n",[81,511,512],{"class":83,"line":90},[81,513,189],{},[81,515,516],{"class":83,"line":96},[81,517,333],{},[81,519,520],{"class":83,"line":102},[81,521,195],{"emptyLinePlaceholder":194},[81,523,524],{"class":83,"line":108},[81,525,526],{},"# Child conftest adds database requirement without overriding parent\n",[81,528,529],{"class":83,"line":114},[81,530,472],{},[81,532,533],{"class":83,"line":120},[81,534,535],{}," not os.environ.get(\"INTEGRATION_DB_HOST\"),\n",[81,537,538],{"class":83,"line":208},[81,539,540],{}," reason=\"Integration database host not specified\"\n",[81,542,543],{"class":83,"line":214},[81,544,223],{},[19,546,547,548,551,552,554,555,557,558,560,561,564,565,568],{},"To diagnose silent overrides or unexpected marker resolution, execute ",[26,549,550],{},"pytest --trace-config",". This command outputs the exact loading order of ",[26,553,417],{}," files, plugin hooks, and marker registrations. Cross-reference this output with your directory structure to verify inheritance chains. If a test unexpectedly runs despite a parent ",[26,556,36],{},", verify that the child ",[26,559,417],{}," isn't inadvertently shadowing the marker namespace or that a higher-precedence marker (e.g., function-level ",[26,562,563],{},"@pytest.mark.skipif(False)",") isn't overriding the skip condition. 
Always validate marker precedence using ",[26,566,567],{},"pytest --collect-only -v --trace-config"," to visualize the exact marker stack attached to each test item before execution.",[14,570,572],{"id":571},"cicd-driven-conditional-execution","CI\u002FCD-Driven Conditional Execution",[19,574,575,576,578],{},"Modern CI\u002FCD pipelines require environment-aware test filtering without modifying source files. Hardcoding environment checks into test modules creates maintenance overhead and couples test logic to infrastructure state. The ",[26,577,161],{}," hook provides a clean, centralized mechanism for injecting markers dynamically based on CI environment variables, GitHub Actions matrix parameters, or deployment stage flags.",[72,580,582],{"className":165,"code":581,"language":167,"meta":77,"style":77},"# conftest.py (dynamic injection)\nimport os\nimport pytest\n\ndef pytest_collection_modifyitems(config, items):\n \"\"\"Dynamically inject skip\u002Fxfail markers based on CI environment.\"\"\"\n ci_stage = os.environ.get(\"CI_STAGE\", \"unit\")\n runner_os = os.environ.get(\"RUNNER_OS\", \"Linux\")\n \n skip_slow = pytest.mark.skipif(\n ci_stage == \"pr_check\" and os.environ.get(\"CI_FAST_MODE\", \"false\") == \"true\",\n reason=\"Skipping slow tests during PR fast-check stage\"\n )\n \n skip_windows_specific = pytest.mark.skipif(\n runner_os != \"Windows\",\n reason=\"Windows-specific tests skipped on non-Windows runners\"\n )\n \n for item in items:\n if \"slow\" in item.keywords:\n item.add_marker(skip_slow)\n if \"windows_only\" in item.keywords:\n item.add_marker(skip_windows_specific)\n",[26,583,584,589,593,597,601,606,611,616,621,626,631,636,641,646,650,655,660,665,669,673,678,683,688,693],{"__ignoreMap":77},[81,585,586],{"class":83,"line":84},[81,587,588],{},"# conftest.py (dynamic 
injection)\n",[81,590,591],{"class":83,"line":90},[81,592,333],{},[81,594,595],{"class":83,"line":96},[81,596,189],{},[81,598,599],{"class":83,"line":102},[81,600,195],{"emptyLinePlaceholder":194},[81,602,603],{"class":83,"line":108},[81,604,605],{},"def pytest_collection_modifyitems(config, items):\n",[81,607,608],{"class":83,"line":114},[81,609,610],{}," \"\"\"Dynamically inject skip\u002Fxfail markers based on CI environment.\"\"\"\n",[81,612,613],{"class":83,"line":120},[81,614,615],{}," ci_stage = os.environ.get(\"CI_STAGE\", \"unit\")\n",[81,617,618],{"class":83,"line":208},[81,619,620],{}," runner_os = os.environ.get(\"RUNNER_OS\", \"Linux\")\n",[81,622,623],{"class":83,"line":214},[81,624,625],{}," \n",[81,627,628],{"class":83,"line":220},[81,629,630],{}," skip_slow = pytest.mark.skipif(\n",[81,632,633],{"class":83,"line":226},[81,634,635],{}," ci_stage == \"pr_check\" and os.environ.get(\"CI_FAST_MODE\", \"false\") == \"true\",\n",[81,637,638],{"class":83,"line":231},[81,639,640],{}," reason=\"Skipping slow tests during PR fast-check stage\"\n",[81,642,643],{"class":83,"line":237},[81,644,645],{}," )\n",[81,647,648],{"class":83,"line":243},[81,649,625],{},[81,651,652],{"class":83,"line":249},[81,653,654],{}," skip_windows_specific = pytest.mark.skipif(\n",[81,656,657],{"class":83,"line":254},[81,658,659],{}," runner_os != \"Windows\",\n",[81,661,662],{"class":83,"line":259},[81,663,664],{}," reason=\"Windows-specific tests skipped on non-Windows runners\"\n",[81,666,667],{"class":83,"line":265},[81,668,645],{},[81,670,671],{"class":83,"line":271},[81,672,625],{},[81,674,675],{"class":83,"line":277},[81,676,677],{}," for item in items:\n",[81,679,680],{"class":83,"line":282},[81,681,682],{}," if \"slow\" in item.keywords:\n",[81,684,685],{"class":83,"line":288},[81,686,687],{}," item.add_marker(skip_slow)\n",[81,689,690],{"class":83,"line":294},[81,691,692],{}," if \"windows_only\" in item.keywords:\n",[81,694,695],{"class":83,"line":300},[81,696,697],{}," 
item.add_marker(skip_windows_specific)\n",[19,699,700,701,703,704,29,707,710],{},"This hook executes after collection but before fixture setup, ensuring that marker injection does not interfere with test discovery. By reading ",[26,702,150],{}," at this stage, you avoid import-time evaluation risks while maintaining deterministic filtering. This pattern integrates seamlessly with GitHub Actions matrix testing, where ",[26,705,706],{},"matrix.os",[26,708,709],{},"matrix.python-version"," can be mapped to environment variables before invoking pytest.",[19,712,713,714,717,718,147,721,724,725,728,729,731,732,140,735,738],{},"When using ",[26,715,716],{},"pytest-xdist"," for parallel execution, dynamic marker injection impacts worker distribution. Pytest distributes tests after marker evaluation, meaning dynamically skipped tests are removed from the distribution pool before workers are spawned. This can cause load imbalance if a large subset of tests is filtered on specific workers. To mitigate this, ensure marker conditions are deterministic across all workers. Avoid using ",[26,719,720],{},"os.getpid()",[26,722,723],{},"socket.gethostname()",", or worker-specific state in marker conditions. Instead, rely on CI-provided environment variables or pre-filtered test lists passed via ",[26,726,727],{},"pytest -k",". If load balancing remains problematic, consider using ",[26,730,716],{},"'s ",[26,733,734],{},"--dist=loadfile",[26,736,737],{},"--dist=loadscope"," strategies to group related tests before distribution.",[14,740,742],{"id":741},"debugging-and-profiling-marker-resolution","Debugging and Profiling Marker Resolution",[19,744,745,746,748,749,752,753,755],{},"Diagnosing unexpected marker behavior requires isolating the evaluation phase from execution. When a marker evaluates to ",[26,747,40],{}," unexpectedly, the root cause typically lies in environment variable leakage, incorrect boolean coercion, or hook execution order conflicts. 
Begin by enabling verbose collection logging: ",[26,750,751],{},"pytest --collect-only -v --log-cli-level=DEBUG",". This reveals the exact marker stack attached to each item and highlights any ",[26,754,63],{}," or evaluation exceptions.",[19,757,758,759,761,762,764,765,768,769,772,773,776],{},"To isolate false-positive ",[26,760,48],{}," triggers, examine the test's execution context. An ",[26,763,48],{}," with ",[26,766,767],{},"strict=True"," will report as ",[26,770,771],{},"FAILED"," if the test unexpectedly passes. If this occurs in parallel runs, race conditions in shared state or non-deterministic fixture teardown may be altering the test outcome. Reproduce the issue with ",[26,774,775],{},"pytest -x -s --dist=no"," to disable parallelism and observe the raw execution flow.",[72,778,780],{"className":165,"code":779,"language":167,"meta":77,"style":77},"# profile_collection.py\nimport time\nimport pytest\n\n# Custom marker condition with explicit timing instrumentation\ndef expensive_check():\n start = time.perf_counter()\n # Simulate network\u002FDB probe\n time.sleep(0.05)\n duration = time.perf_counter() - start\n print(f\"[MARKER PROFILING] expensive_check took {duration:.4f}s\")\n return False # Replace with actual logic\n\nEXPENSIVE_SKIP = pytest.mark.skipif(\n expensive_check(),\n reason=\"External service unavailable\"\n)\n",[26,781,782,787,792,796,800,805,810,815,820,825,830,835,840,844,849,854,859],{"__ignoreMap":77},[81,783,784],{"class":83,"line":84},[81,785,786],{},"# profile_collection.py\n",[81,788,789],{"class":83,"line":90},[81,790,791],{},"import time\n",[81,793,794],{"class":83,"line":96},[81,795,189],{},[81,797,798],{"class":83,"line":102},[81,799,195],{"emptyLinePlaceholder":194},[81,801,802],{"class":83,"line":108},[81,803,804],{},"# Custom marker condition with explicit timing instrumentation\n",[81,806,807],{"class":83,"line":114},[81,808,809],{},"def expensive_check():\n",[81,811,812],{"class":83,"line":120},[81,813,814],{}," start = 
 time.perf_counter()\n",[81,816,817],{"class":83,"line":208},[81,818,819],{}," # Simulate network\u002FDB probe\n",[81,821,822],{"class":83,"line":214},[81,823,824],{}," time.sleep(0.05)\n",[81,826,827],{"class":83,"line":220},[81,828,829],{}," duration = time.perf_counter() - start\n",[81,831,832],{"class":83,"line":226},[81,833,834],{}," print(f\"[MARKER PROFILING] expensive_check took {duration:.4f}s\")\n",[81,836,837],{"class":83,"line":231},[81,838,839],{}," return False # Replace with actual logic\n",[81,841,842],{"class":83,"line":237},[81,843,195],{"emptyLinePlaceholder":194},[81,845,846],{"class":83,"line":243},[81,847,848],{},"EXPENSIVE_SKIP = pytest.mark.skipif(\n",[81,850,851],{"class":83,"line":249},[81,852,853],{}," expensive_check(),\n",[81,855,856],{"class":83,"line":254},[81,857,858],{}," reason=\"External service unavailable\"\n",[81,860,861],{"class":83,"line":259},[81,862,223],{},[19,864,865,866,869,870,873],{},"To profile collection bottlenecks, run ",[26,867,868],{},"pytest --durations=0",". This reports the slowest setup, call, and teardown phases per test; collection time is not itemized there, so measure it separately by timing a bare collect-only run. If collection dominates the total runtime, marker conditions are likely invoking heavy operations. Replace synchronous checks with cached results or precompute boolean flags in a ",[26,871,872],{},"pytest_configure"," hook that runs once per session.",[19,875,876],{},"For step-by-step diagnosis of parallel marker inconsistencies:",[878,879,880,887,896,906,916],"ol",{},[881,882,883,884,886],"li",{},"Run ",[26,885,129],{}," locally and in CI. 
Compare outputs to identify environment-dependent marker attachments.",[881,888,889,890,892,893,895],{},"Execute ",[26,891,550],{}," to verify ",[26,894,417],{}," loading order matches your directory structure.",[881,897,898,899,901,902,905],{},"Temporarily disable ",[26,900,716],{}," (",[26,903,904],{},"pytest -p no:xdist",") to rule out worker distribution artifacts.",[881,907,908,909,912,913,915],{},"Add ",[26,910,911],{},"print(f\"Item: {item.nodeid}, Markers: {item.own_markers}\")"," inside ",[26,914,161],{}," to log exact marker states before filtering.",[881,917,918,919,922,923,926,927,407],{},"Validate environment variable casing and type coercion. CI systems often inject ",[26,920,921],{},"TRUE","\u002F",[26,924,925],{},"FALSE"," as strings, which evaluate as truthy in Python. Always normalize with ",[26,928,929],{},".lower() == \"true\"",[14,931,933],{"id":932},"edge-cases-and-anti-patterns","Edge Cases and Anti-Patterns",[19,935,936,937,939,940,942,943,945,946,948,949,951],{},"Conditional markers are powerful but prone to subtle anti-patterns that degrade test reliability and suite performance. The most critical is misusing ",[26,938,767],{}," in ",[26,941,48],{}," markers. When ",[26,944,767],{}," is applied, any test that passes is reported as ",[26,947,771],{},". This is intentional for tracking expected failures, but becomes problematic when flaky tests intermittently pass due to timing variations or race conditions. 
If a test passes unexpectedly, remove ",[26,950,767],{}," temporarily or refactor the test to enforce deterministic failure conditions.",[72,953,955],{"className":165,"code":954,"language":167,"meta":77,"style":77},"# test_xfail_strict.py\nimport pytest\n\n# strict=True converts unexpected passes into hard failures\n@pytest.mark.xfail(\n condition=True,\n reason=\"Known race condition in async event loop\",\n strict=True\n)\ndef test_async_event_ordering():\n # If this passes, pytest reports FAILED\n assert False, \"Simulated expected failure\"\n",[26,956,957,962,966,970,975,980,985,990,995,999,1004,1009],{"__ignoreMap":77},[81,958,959],{"class":83,"line":84},[81,960,961],{},"# test_xfail_strict.py\n",[81,963,964],{"class":83,"line":90},[81,965,189],{},[81,967,968],{"class":83,"line":96},[81,969,195],{"emptyLinePlaceholder":194},[81,971,972],{"class":83,"line":102},[81,973,974],{},"# strict=True converts unexpected passes into hard failures\n",[81,976,977],{"class":83,"line":108},[81,978,979],{},"@pytest.mark.xfail(\n",[81,981,982],{"class":83,"line":114},[81,983,984],{}," condition=True,\n",[81,986,987],{"class":83,"line":120},[81,988,989],{}," reason=\"Known race condition in async event loop\",\n",[81,991,992],{"class":83,"line":208},[81,993,994],{}," strict=True\n",[81,996,997],{"class":83,"line":214},[81,998,223],{},[81,1000,1001],{"class":83,"line":220},[81,1002,1003],{},"def test_async_event_ordering():\n",[81,1005,1006],{"class":83,"line":226},[81,1007,1008],{}," # If this passes, pytest reports FAILED\n",[81,1010,1011],{"class":83,"line":231},[81,1012,1013],{}," assert False, \"Simulated expected failure\"\n",[19,1015,1016,1017,140,1019,1021,1022,140,1025,1028,1029,140,1032,1035],{},"Another frequent anti-pattern is mixing fixture logic with marker conditions. Fixtures execute during the setup phase, while markers evaluate during collection. 
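Because a marker cannot see fixture state, a fixture-dependent decision has to move into the test body via pytest.skip(); a minimal sketch follows, in which the db_connection fixture and its None sentinel are hypothetical:

```python
# Imperative skip: decide after fixtures exist, inside the test body
import pytest

@pytest.fixture
def db_connection():
    # Hypothetical fixture: returns None when no database is configured
    return None

def test_query_roundtrip(db_connection):
    if db_connection is None:
        pytest.skip("No live database connection available")
    assert db_connection.execute("SELECT 1") == 1
```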
Attempting to reference fixture return values inside ",[26,1018,36],{},[26,1020,48],{}," will raise ",[26,1023,1024],{},"NameError",[26,1026,1027],{},"AttributeError"," because the fixture has not yet been instantiated. If conditional execution depends on fixture state, use ",[26,1030,1031],{},"pytest.skip()",[26,1033,1034],{},"pytest.xfail()"," inside the test body instead of relying on markers.",[19,1037,1038,1039,1041],{},"Global state in marker conditions introduces non-determinism, especially in ",[26,1040,716],{}," environments. Avoid mutating module-level variables during collection. Each worker process imports test modules independently, meaning global state is not shared. If a marker condition relies on a mutable cache or singleton, workers will evaluate it against uninitialized state, causing inconsistent skipping across the matrix.",[19,1043,1044],{},"Profiling complex boolean expressions reveals hidden CPU and memory overhead. Large test suites with deeply nested logical operators in markers can trigger excessive string parsing and namespace lookups during collection. 
To optimize, precompute boolean flags at module import time:",[72,1046,1048],{"className":165,"code":1047,"language":167,"meta":77,"style":77},"import pytest  # Optimized marker evaluation\nimport os\nimport sys\n\n# Precomputed at import time, evaluated once per module\n_IS_CI = os.environ.get(\"CI\", \"false\").lower() == \"true\"\n_IS_WINDOWS = sys.platform == \"win32\"\n\nCI_WINDOWS_SKIP = pytest.mark.skipif(\n _IS_CI and _IS_WINDOWS,\n reason=\"Skipping known CI\u002FWindows incompatibility\"\n)\n",[26,1049,1050,1055,1059,1063,1067,1072,1077,1082,1086,1091,1096,1101],{"__ignoreMap":77},[81,1051,1052],{"class":83,"line":84},[81,1053,1054],{},"import pytest  # Optimized marker evaluation\n",[81,1056,1057],{"class":83,"line":90},[81,1058,333],{},[81,1060,1061],{"class":83,"line":96},[81,1062,179],{},[81,1064,1065],{"class":83,"line":102},[81,1066,195],{"emptyLinePlaceholder":194},[81,1068,1069],{"class":83,"line":108},[81,1070,1071],{},"# Precomputed at import time, evaluated once per module\n",[81,1073,1074],{"class":83,"line":114},[81,1075,1076],{},"_IS_CI = os.environ.get(\"CI\", \"false\").lower() == \"true\"\n",[81,1078,1079],{"class":83,"line":120},[81,1080,1081],{},"_IS_WINDOWS = sys.platform == \"win32\"\n",[81,1083,1084],{"class":83,"line":208},[81,1085,195],{"emptyLinePlaceholder":194},[81,1087,1088],{"class":83,"line":214},[81,1089,1090],{},"CI_WINDOWS_SKIP = pytest.mark.skipif(\n",[81,1092,1093],{"class":83,"line":220},[81,1094,1095],{}," _IS_CI and _IS_WINDOWS,\n",[81,1097,1098],{"class":83,"line":226},[81,1099,1100],{}," reason=\"Skipping known CI\u002FWindows incompatibility\"\n",[81,1102,1103],{"class":83,"line":231},[81,1104,223],{},[19,1106,1107],{},"This pattern eliminates repeated environment lookups and reduces collection memory footprint. 
Always benchmark collection time after introducing complex marker logic to ensure scalability.",[14,1109,1111],{"id":1110},"conclusion-and-next-steps","Conclusion and Next Steps",[19,1113,1114,1115,1117,1118,1121],{},"Conditional markers in pytest provide a robust mechanism for environment-aware, platform-specific, and CI-driven test filtering. By understanding that marker evaluation occurs during the collection phase, you can avoid import-time failures, optimize collection performance, and prevent silent overrides. Registering markers in configuration files, leveraging ",[26,1116,161],{}," for dynamic injection, and profiling collection overhead with ",[26,1119,1120],{},"--collect-only"," form the foundation of a maintainable conditional execution strategy.",[19,1123,1124],{},"For production-grade test suites, adhere to these validation steps before merging:",[878,1126,1127,1134,1139,1145,1148],{},[881,1128,1129,1130,1133],{},"Verify marker registration in ",[26,1131,1132],{},"pyproject.toml"," to suppress warnings.",[881,1135,883,1136,1138],{},[26,1137,129],{}," locally and in CI to confirm consistent marker attachment.",[881,1140,1141,1142,1144],{},"Profile collection time with ",[26,1143,1120],{}," and refactor expensive conditions into cached imports.",[881,1146,1147],{},"Test marker behavior across all target platforms and Python versions in your CI matrix.",[881,1149,1150,1151,1153],{},"Disable ",[26,1152,716],{}," temporarily to rule out worker distribution artifacts when debugging inconsistent skips.",[19,1155,1156],{},"As your test architecture scales, consider transitioning from inline markers to custom pytest plugins that encapsulate complex filtering logic. Plugins enable centralized marker resolution, advanced hook integration, and reusable conditional patterns across multiple repositories. 
Mastering conditional execution is a critical step toward building resilient, high-throughput test pipelines that adapt dynamically to infrastructure state without sacrificing reliability or developer velocity.",[14,1158,1160],{"id":1159},"frequently-asked-questions","Frequently Asked Questions",[19,1162,1163,1170,1171,1174,1175,1178,1179,407],{},[1164,1165,1166,1167,1169],"strong",{},"Why does ",[26,1168,28],{}," not evaluate correctly when using parametrized tests?","\nParametrization occurs during collection, but marker evaluation happens before parameter expansion. If your condition depends on a parameter value, the marker will evaluate against the unexpanded test function. To apply conditional logic to specific parameter sets, use ",[26,1172,1173],{},"pytest.param"," with the ",[26,1176,1177],{},"marks"," argument: ",[26,1180,1181],{},"pytest.param(value, marks=pytest.mark.skipif(condition, reason=\"...\"))",[19,1183,1184,1187,1188,1190,1191,1193,1194,1196,1197,1199,1200,1203],{},[1164,1185,1186],{},"How can I dynamically skip tests based on CI environment variables without modifying test files?","\nImplement ",[26,1189,161],{}," in your root ",[26,1192,417],{},". Read ",[26,1195,150],{}," inside the hook, construct ",[26,1198,28],{}," objects, and attach them to matching items using ",[26,1201,1202],{},"item.add_marker()",". This approach keeps test source files clean and centralizes CI logic.",[19,1205,1206,1212,1213,939,1216,140,1218,1221],{},[1164,1207,1208,1209,1211],{},"What causes ",[26,1210,63],{}," and how do I suppress it safely?","\nPytest warns when it encounters markers not declared in the configuration. Register them under ",[26,1214,1215],{},"[tool.pytest.ini_options] markers",[26,1217,1132],{},[26,1219,1220],{},"pytest.ini",". 
This enables IDE autocompletion, prevents typos, and ensures consistent marker resolution across the suite.",[19,1223,1224,1227,1228,1230,1231,1234],{},[1164,1225,1226],{},"Can I profile which markers are slowing down test collection?","\nYes. Run ",[26,1229,401],{}," to log collection time per module. Add explicit ",[26,1232,1233],{},"time.perf_counter()"," instrumentation inside marker condition functions to trace evaluation frequency. Replace synchronous external checks with precomputed boolean flags to eliminate bottlenecks.",[19,1236,1237,1243,1244,1246],{},[1164,1238,1239,1240,1242],{},"How do marker conditions interact with ",[26,1241,716],{}," worker distribution?","\nMarkers are evaluated before test distribution. If conditions are non-deterministic or rely on worker-specific state (e.g., ",[26,1245,720],{},"), workers may receive inconsistent test sets, causing load imbalance or unexpected skips. Use deterministic environment variables, pre-filtered test lists, or session-scoped fixtures to ensure uniform evaluation across all workers.",[1248,1249,1250],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: 
var(--shiki-dark-text-decoration);}",{"title":77,"searchDepth":90,"depth":90,"links":1252},[1253,1254,1255,1256,1257,1258,1259,1260],{"id":16,"depth":90,"text":17},{"id":133,"depth":90,"text":134},{"id":410,"depth":90,"text":411},{"id":571,"depth":90,"text":572},{"id":741,"depth":90,"text":742},{"id":932,"depth":90,"text":933},{"id":1110,"depth":90,"text":1111},{"id":1159,"depth":90,"text":1160},"md",{},"\u002Fadvanced-pytest-architecture-configuration\u002Fpytest-configuration-best-practices\u002Fpytest-markers-for-conditional-test-execution",{"title":5,"description":77},"advanced-pytest-architecture-configuration\u002Fpytest-configuration-best-practices\u002Fpytest-markers-for-conditional-test-execution\u002Findex","W64SXa1IUjbsouUPDwgcYv9NqU0M8C1uKqpnLJYXncU",1778004579190]