AI & machine learning
Neural and ML work for environments where a wrong answer has a regulator, a plaintiff, or a patient on the other end of it. QGI works on neurosymbolic integration, explainable architectures, and uncertainty-aware deployment — the intersection of ML capability and decision accountability.
Four active threads
Each direction is driven by a regulated-decision use case. We do not pursue ML capability for its own sake — every thread ties back to the question: can this decision be defended?
Thread 01
Neurosymbolic decision engines
Systems that couple neural pattern recognition with symbolic policy and rule engines. The neural side handles the messy upstream signal; the symbolic side delivers the replayable decision trail. The research question is the glue: how to make the handoff sound, auditable, and stable under adversarial inputs.
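The handoff can be sketched in miniature. This is an illustrative toy, not QGI's engine: a stand-in neural scorer proposes a risk score, and a symbolic rule list either accepts it or fails closed, recording every evaluation in a trail that can be replayed later. All field and rule names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    trail: list  # ordered record of every check, replayable in audit

def neural_score(app: dict) -> float:
    # Stand-in for a trained model's risk score in [0, 1].
    return 0.5 * app["debt_ratio"] + 0.5 * (1 - app["repayment_history"])

# Symbolic layer: named rules over the application and the neural score.
RULES = [
    ("income_verified", lambda app, s: app["income_verified"]),
    ("risk_below_threshold", lambda app, s: s < 0.4),
]

def decide(app: dict) -> Decision:
    score = neural_score(app)
    trail = [("neural_risk_score", round(score, 3))]
    for name, rule in RULES:
        passed = rule(app, score)
        trail.append((name, passed))
        if not passed:
            return Decision("refer_to_human", trail)  # fail closed
    return Decision("approve", trail)

d = decide({"debt_ratio": 0.2, "repayment_history": 0.9, "income_verified": True})
```

The point of the shape, even at toy scale: the neural output enters the trail as just another recorded fact, and only the symbolic layer can turn it into an outcome.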
Thread 02
Explainable architectures for regulated AI
Networks where reasoning is traceable by construction, not reconstructed post-hoc. We study attention alignment with case law, counterfactual generation that is admissible in adjudication, and interpretability that survives contact with a compliance officer.
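To make "counterfactual generation" concrete, here is a minimal sketch under a strong simplifying assumption (a linear decision rule, which is not what the research above studies): find the single-feature change of smallest magnitude that moves a denied applicant across the decision boundary. Weights and feature names are invented for illustration.

```python
def counterfactual(x, w, b, feature_names):
    # Decision rule: approve if w . x >= b.
    score = sum(wi * xi for wi, xi in zip(w, x))
    if score >= b:
        return None  # already approved; nothing to explain
    # For each feature, the change that would (alone) reach the boundary.
    options = []
    for i, wi in enumerate(w):
        if wi != 0:
            delta = (b - score) / wi
            options.append((abs(delta), feature_names[i], delta))
    options.sort()  # smallest-magnitude change first
    _, name, delta = options[0]
    verb = "increase" if delta > 0 else "decrease"
    return f"{verb} {name} by {abs(delta):.2f}"

msg = counterfactual([30_000, 0.6], [1e-4, -2.0], 3.0, ["income", "debt_ratio"])
```

An adjudication-grade counterfactual has to satisfy much more than minimality (plausibility, actionability, consistency with policy), which is exactly why it is a research thread rather than a loop like this one.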
Thread 03
Uncertainty quantification & abstention
Calibrated confidence, conformal prediction, and learned abstention policies. A model that refuses to answer under ambiguity is more useful in mortgage decisioning than one that always answers with 99% confidence. We measure and engineer the refusal rate.
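Conformal prediction gives one principled way to turn refusal into a tunable quantity. A hedged sketch with toy numbers (split-conformal over class probabilities, not a production method): calibrate a nonconformity threshold at a chosen error rate, emit the set of classes within it, and abstain whenever that set is not a single class.

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    # Quantile of calibration nonconformity scores (1 - p(true class)),
    # with the standard finite-sample (n + 1) correction.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def predict_or_abstain(probs, qhat):
    # Keep every class whose nonconformity 1 - p is within the threshold.
    pred_set = [c for c, p in probs.items() if 1 - p <= qhat]
    return pred_set[0] if len(pred_set) == 1 else "abstain"

# Toy calibration scores: 1 - p(true class) on held-out decisions.
cal = [0.05, 0.1, 0.12, 0.2, 0.3, 0.15, 0.08, 0.25, 0.18, 0.22]
qhat = conformal_threshold(cal, alpha=0.2)

confident = predict_or_abstain({"approve": 0.9, "deny": 0.1}, qhat)
ambiguous = predict_or_abstain({"approve": 0.55, "deny": 0.45}, qhat)
```

Tightening `alpha` shrinks the allowed error rate and raises the abstention rate; that trade-off is the dial the thread above refers to engineering.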
Thread 04
LLM governance for high-stakes decisions
Constrained decoding, tool-augmented reasoning, and audit-grade prompt regimes. LLMs are powerful drafters of decisions, but raw prompt-and-answer is not admissible. QGI is building the governance layer that keeps LLM output inside a defensible envelope.
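The envelope idea can be shown in a few lines. This is an assumed design for illustration, not QGI's implementation: the LLM drafts a decision as JSON, and a governance layer rejects anything outside the allowed schema before it can take effect, recording the reason either way. The field names and outcome vocabulary are hypothetical.

```python
import json

ALLOWED_OUTCOMES = {"approve", "deny", "refer"}
REQUIRED_FIELDS = {"outcome", "rationale", "policy_refs"}

def govern(llm_output: str) -> dict:
    # Nothing the model emits becomes a decision until it passes here.
    try:
        draft = json.loads(llm_output)
    except json.JSONDecodeError:
        return {"status": "rejected", "reason": "not valid JSON"}
    missing = REQUIRED_FIELDS - draft.keys()
    if missing:
        return {"status": "rejected", "reason": f"missing fields: {sorted(missing)}"}
    if draft["outcome"] not in ALLOWED_OUTCOMES:
        return {"status": "rejected", "reason": "outcome outside envelope"}
    return {"status": "accepted", "decision": draft}

result = govern(
    '{"outcome": "approve", "rationale": "meets policy 4.2", "policy_refs": ["4.2"]}'
)
```

Real constrained decoding pushes the envelope into the sampler itself, so out-of-schema tokens are never generated; the post-hoc check above is the simplest version of the same contract.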
QGI's ML research is not separate from its symbolic work — it's the other half of the same problem. Neural components generate hypotheses. Symbolic components verify, constrain, and produce the audit trail. The platform is built so both can be reasoned about under a single governance contract.
Neural without symbolic is opaque. Symbolic without neural is narrow. QGI bets that regulated decisioning needs both, under one contract.
Publications in preparation
Peer-reviewed output from this sector is in preparation. Until papers are public, the relevant reading is the symbolic-reasoning track record — most AI & ML work at QGI is built on top of that foundation.
For an early look at an AI & ML thread — or to co-author on neurosymbolic verification, uncertainty methods, or LLM governance — see the research collaboration page.
Working on neurosymbolic systems, XAI, calibrated ML, or LLM governance?