literature_search
Search PubMed, ClinicalTrials.gov, bioRxiv/medRxiv, ChEMBL, FDA Orange Book, FDA Purple Book, enterprise sources (Embase, ScienceDirect, Cochrane, Citeline, Pharmapendium, Cortellis), HTA cost reference sources (CMS NADAC, PSSRU, NHS National Cost Collection, BNF, PBS Schedule), LATAM sources (DATASUS, CONITEC, ANVISA, PAHO, IETS, FONASA), APAC sources (HITAP), and HTA appraisal/guidance sources (NICE TAs, CADTH CDR/pCODR, ICER, PBAC PSDs, G-BA AMNOG, HAS Transparency Committee, IQWiG, AIFA, TLV Sweden, INESSS Quebec) for evidence on a drug or indication. Returns structured results including HTA precedents and appraisal decisions with a full audit trail suitable for HTA submissions.
cost_effectiveness_model
Build a cost-utility analysis (ICER, QALYs) for a drug vs a comparator, following ISPOR good practice guidelines and the NICE reference case. Includes probabilistic sensitivity analysis (PSA), one-way sensitivity analysis, and a cost-effectiveness acceptability curve (CEAC).
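The core quantities this tool reports can be sketched in a few lines. Below is a minimal, hypothetical illustration (the input numbers are invented, and the normal sampling distributions are an assumption, not the tool's actual model): the ICER as incremental cost over incremental QALYs, and a toy PSA that evaluates one point on the CEAC via the net monetary benefit rule.

```python
import random

def icer(cost_new, qaly_new, cost_cmp, qaly_cmp):
    """Incremental cost-effectiveness ratio: delta cost / delta QALY."""
    return (cost_new - cost_cmp) / (qaly_new - qaly_cmp)

def psa_ceac(n_sims, wtp, seed=0):
    """Toy PSA: sample incremental costs/QALYs and report the probability
    the new drug is cost-effective at a willingness-to-pay (WTP)
    threshold, i.e. the CEAC value at that WTP."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        d_cost = rng.gauss(20000, 3000)   # incremental cost (hypothetical)
        d_qaly = rng.gauss(0.8, 0.15)     # incremental QALYs (hypothetical)
        # Net monetary benefit decision rule: NMB = WTP * dQALY - dCost
        if wtp * d_qaly - d_cost > 0:
            wins += 1
    return wins / n_sims

print(icer(50000, 2.0, 30000, 1.2))  # 25000.0
```

Sweeping `wtp` over a grid and plotting `psa_ceac` against it yields the full CEAC.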
hta_dossier_prep
Structure evidence into HTA body-specific submission format (NICE STA, EMA, FDA, IQWiG, HAS, EU JCA, or Global Value Dossier). Produces draft sections with gap analysis and auto-GRADE evidence quality tables. Accepts output from literature_search and cost_effectiveness_model.
knowledge_search
Search a project's knowledge base (raw/ and wiki/) for text matches. Returns file paths with line numbers and snippets. Use this to find previously-retrieved literature, model runs, and compiled wiki content without re-querying external APIs.
knowledge_read
Read a file from a project's raw/ or wiki/ tree. Path is relative to project root. Only raw/ and wiki/ subtrees accessible.
knowledge_write
Write a file to the project's wiki/ tree. Path MUST start with 'wiki/' and end with '.md'. Use this to compile/organize evidence from raw/ files into a structured knowledge base. Supports Obsidian-style [[wikilinks]].
project_create
Initialize a new HEOR project workspace with directory skeleton and project.yaml metadata. Idempotent — returns existing project if already created. Required before using the `project` parameter in other tools.
evidence_network
Analyze literature search results to build an evidence network map. Extracts intervention-comparator pairs from titles and abstracts, constructs a treatment comparison network, and assesses NMA (network meta-analysis) feasibility. Pass the results array from a prior literature_search call.
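The feasibility check behind this tool reduces to a graph question: two treatments can be compared indirectly only if they are connected in the comparison network. A minimal sketch with invented treatment names (the tool's actual pair extraction from abstracts is more involved):

```python
from collections import defaultdict

def build_network(pairs):
    """Build an undirected treatment-comparison graph from
    (intervention, comparator) pairs extracted from abstracts."""
    adj = defaultdict(set)
    for a, b in pairs:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def connected(adj, start, goal):
    """Are two treatments linked, directly or via intermediaries?
    If yes, an indirect comparison is structurally feasible."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node] - seen)
    return False

net = build_network([("DrugA", "Placebo"), ("DrugB", "Placebo")])
print(connected(net, "DrugA", "DrugB"))  # True: linked via Placebo
```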
indirect_comparison
Compute indirect treatment comparisons using the Bucher method (single common comparator) or frequentist network meta-analysis (full network). Requires user-supplied effect sizes (point estimates + 95% CI) from published trials. Supports mean differences (MD) and ratio measures (OR, RR, HR). Auto-selects method based on network structure, or user can specify.
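For a ratio measure with a single common comparator C, the Bucher calculation works on the log scale: log effect(A vs B) = log effect(A vs C) minus log effect(B vs C), with variances summing. A compact sketch using invented hazard ratios (SEs are back-calculated from the 95% CIs):

```python
import math

def bucher(hr_ac, ci_ac, hr_bc, ci_bc, z=1.96):
    """Bucher indirect comparison of A vs B via common comparator C,
    on the log scale, for a ratio measure (HR/OR/RR).
    ci_* are (lower, upper) 95% confidence limits."""
    def log_se(ci):
        lo, hi = ci
        return (math.log(hi) - math.log(lo)) / (2 * z)
    d = math.log(hr_ac) - math.log(hr_bc)            # log effect, A vs B
    se = math.sqrt(log_se(ci_ac) ** 2 + log_se(ci_bc) ** 2)
    hr = math.exp(d)
    return hr, (math.exp(d - z * se), math.exp(d + z * se))

# Hypothetical inputs: A vs C trial and B vs C trial
hr, ci = bucher(0.70, (0.55, 0.89), 0.85, (0.70, 1.03))
```

Note how the indirect CI is wider than either trial's own CI: the two variances add.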
budget_impact_model
Estimate the total budget impact of adopting a new intervention over 1-5 years. Follows ISPOR Budget Impact Analysis good practice guidelines (Mauskopf 2007, Sullivan 2014). Computes year-by-year net cost to payer, including market share uptake, treatment displacement, and population growth.
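The year-by-year arithmetic can be sketched as follows, using invented population and cost inputs. This is a simplification of the tool's model (one displaced treatment, uniform eligibility, compound population growth):

```python
def budget_impact(pop, eligible_rate, uptake_by_year,
                  cost_new, cost_old, pop_growth=0.0):
    """Year-by-year net budget impact: patients switching from the
    displaced treatment to the new one, with population growth.
    uptake_by_year: market share of the new drug in each year."""
    results = []
    for year, share in enumerate(uptake_by_year, start=1):
        n = pop * ((1 + pop_growth) ** (year - 1)) * eligible_rate
        switchers = n * share
        net = switchers * (cost_new - cost_old)
        results.append(round(net, 2))
    return results

print(budget_impact(1_000_000, 0.01, [0.1, 0.2, 0.3], 12000, 8000))
# [4000000.0, 8000000.0, 12000000.0]
```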
population_adjusted_comparison
⚠️ EXPERIMENTAL / orientation-only. Approximate population-adjusted indirect comparison using summary-level statistics (mean, SD per covariate). True MAIC/STC per NICE DSU TSD 18 requires individual patient data (IPD) for one trial. This tool inflates the SE of a Bucher indirect comparison based on covariate imbalance (MAIC-style ESS penalty) and applies a simple linear adjustment based on standardized mean differences (STC-style). Point estimates should be interpreted as approximate — not submission-ready. For a definitive analysis, use IPD with an outcome regression model.
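The "MAIC-style ESS penalty" mentioned above follows from the standard Kish effective-sample-size formula: heavy down-weighting for covariate balance shrinks the ESS, and the standard error is inflated by sqrt(N / ESS). A minimal sketch with toy weights:

```python
def ess(weights):
    """Effective sample size of a weighted sample (Kish formula):
    ESS = (sum w)^2 / sum(w^2)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

w = [1.0, 1.0, 0.2, 0.2]          # toy matching weights
n = len(w)
inflation = (n / ess(w)) ** 0.5   # factor to multiply the Bucher SE by
```

With equal weights the ESS equals N and the inflation factor is 1; the more unequal the weights, the larger the penalty.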
survival_fitting
⚠️ EXPERIMENTAL. Fit parametric survival distributions (Exponential, Weibull, Log-logistic, Log-normal, Gompertz) to Kaplan-Meier SUMMARY data. Returns AIC/BIC model comparison for orientation. IMPORTANT: this fits to KM step data (time, survival proportion, n_at_risk), not individual patient-level events/censoring times. Results are approximate compared to true MLE on IPD. For NICE DSU TSD 14 compliant survival modeling, use IPD with flexsurv (R) or equivalent. Provide n_at_risk on each KM row for better fits — otherwise a default sample size is assumed.
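One common approximation for fitting a Weibull to KM summary points is the linearization log(-log S(t)) = k log(t) - k log(lambda), solved by ordinary least squares. This is a sketch of that idea only (it ignores censoring and n_at_risk, so it is cruder than what the tool does, and far cruder than MLE on IPD):

```python
import math

def fit_weibull_km(times, surv):
    """Approximate Weibull fit to KM summary points via the
    linearization log(-log S(t)) = k*log(t) - k*log(lambda).
    Orientation-only: plain OLS, no censoring structure."""
    pts = [(math.log(t), math.log(-math.log(s)))
           for t, s in zip(times, surv) if 0 < s < 1]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    k = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    lam = math.exp(mx - my / k)   # from intercept: -k*log(lambda)
    return k, lam                 # shape, scale

# KM points generated from a true Weibull(k=1.5, lambda=10)
t = [2, 4, 6, 8, 10]
s = [math.exp(-(ti / 10) ** 1.5) for ti in t]
k, lam = fit_weibull_km(t, s)
```

On exact Weibull data the linearization recovers the true parameters; on real KM steps it gives a rough starting point, which is why AIC/BIC here are labeled orientation-only.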
screen_abstracts
Screen literature search results using PICO criteria. Scores each abstract by relevance to the research question, classifies study design, and returns a ranked shortlist with inclusion/exclusion decisions and reasons. Pass the results array from a prior literature_search call (use output_format='json'). Follows Cochrane Handbook Chapter 4 screening methodology.
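The flavor of PICO-based relevance scoring can be sketched with a toy keyword heuristic. This is a hypothetical illustration only, not the tool's actual scoring, and the drug/indication terms below are invented examples:

```python
def pico_score(abstract, pico):
    """Toy relevance score: fraction of PICO groups (population,
    intervention, comparator, outcome) with at least one term
    appearing in the abstract. Hypothetical heuristic."""
    text = abstract.lower()
    hits = sum(
        any(term.lower() in text for term in terms)
        for terms in pico.values()
    )
    return hits / len(pico)

pico = {
    "population": ["adults", "NSCLC"],
    "intervention": ["pembrolizumab"],
    "comparator": ["chemotherapy", "docetaxel"],
    "outcome": ["overall survival", "progression-free survival"],
}
```

Ranking abstracts by such a score, then applying design-based inclusion/exclusion rules, mirrors the two-pass screening workflow in the Cochrane Handbook.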
risk_of_bias
Assess risk of bias for a set of studies using the appropriate Cochrane instrument: RoB 2 (RCTs), ROBINS-I (observational studies), or AMSTAR-2 (systematic reviews/meta-analyses). Instrument is auto-detected from study_type or can be specified. Judgments are inferred from abstract text — domains without sufficient reporting are marked Unclear. Returns a per-study table and a rob_results object to pass to hta_dossier_prep for evidence-based GRADE assessment.
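The instrument auto-detection described above amounts to a mapping from study design to Cochrane instrument. A plausible sketch (the matching keywords are an assumption, not the tool's actual rules):

```python
def pick_instrument(study_type):
    """Map a study design label to the appropriate Cochrane
    risk-of-bias instrument. Hypothetical keyword rules."""
    t = study_type.lower()
    if "rct" in t or "randomi" in t:
        return "RoB 2"
    if "systematic review" in t or "meta-analysis" in t:
        return "AMSTAR-2"
    return "ROBINS-I"   # default: observational designs
```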
validate_links
Validate URLs by making HEAD requests and checking HTTP status codes. Returns categorization: working (200), browser_only (403 from bot-blocking sites that work in browsers), broken (404/410), or timeout/error. ALWAYS use this before presenting reference links to users — broken links destroy trust. Pass all URLs you plan to cite.
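The categorization logic can be sketched with the standard library; this is a minimal approximation of the buckets described above (real-world checkers typically also set a browser-like User-Agent and retry, which is omitted here):

```python
import urllib.request
import urllib.error

def categorize_status(code):
    """Map an HTTP status code to the link categories above."""
    if code == 200:
        return "working"
    if code == 403:
        return "browser_only"   # bot-blocked but fine in a browser
    if code in (404, 410):
        return "broken"
    return "error"

def check_url(url, timeout=5):
    """HEAD-request a URL and categorize the response."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return categorize_status(resp.status)
    except urllib.error.HTTPError as e:
        return categorize_status(e.code)
    except (urllib.error.URLError, OSError):
        return "timeout/error"
```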