Generative AI is a tool: powerful and versatile, yet always an instrument under human direction and accountability. This framing is not merely semantic; it is ethical. When we treat AI as a tool rather than an autonomous agent, we maintain clear lines of responsibility: the person using the tool bears accountability for the outcomes it produces. In practice, I think best in dialogue. Speaking out loud or typing slows me down enough to clarify vision, choose deliberately, and set specific goals. A conversation with a model sharpens requirements and helps me articulate what I need before delegating execution. I retain authorship and final review over every claim, figure, and line of code. The tool does not bear responsibility; I do.
I place AI where it adds leverage: after intent and requirements are set; during documentation, refactoring, and clarity passes; and as a second set of eyes on completeness. It is never a substitute for interpretation, architecture, or deriving requirements from physical models. I disclose assistance whenever AI materially shapes text or code, and I avoid confidential or export-controlled inputs. Because science must be reproducible, AI-assisted steps must be reproducible too: I document which tools I used, how I applied them, and how I validated their outputs. Generative work can be broad (scaffolds, refactors, features, data transformations), but it lands in targeted, reviewable increments behind tests and code review. This discipline maintains scientific integrity while capturing AI's strengths.
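To make that documentation concrete, here is a minimal sketch of the kind of provenance record I have in mind. The class, field names, and values are illustrative assumptions, not an interface from SolarWindPy or any published tool:

```python
from dataclasses import dataclass, field

# Hypothetical provenance record for one AI-assisted step. The structure is
# illustrative; what matters is capturing tool, task, and validation so the
# step can be audited and reproduced later.
@dataclass
class AssistedStep:
    tool: str            # model name and version, pinned for reproducibility
    task: str            # what the tool was asked to do
    prompt_summary: str  # enough detail to re-run or audit the request
    validation: list[str] = field(default_factory=list)  # how output was checked

step = AssistedStep(
    tool="example-model-2025-01",  # placeholder identifier, not a real model tag
    task="behavior-preserving refactor of a plotting helper",
    prompt_summary="requested a readability refactor with no public API changes",
    validation=["full pytest suite", "line-by-line human review", "CI on the PR"],
)
```

Whether a record like this lives in a commit trailer, a changelog entry, or a sidecar file matters less than that it exists and travels with the change.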
To make AI effective and safe, I engineer the system around it. Clean repository practices (tests, continuous integration, version control, and clear documentation) scaffold changes that are easy to review and reproduce. I work requirements-first and link each requirement to reviewable commits; a sketch of what that traceability can look like follows below. Clear expectations keep reviews focused on artifacts and criteria, not people, building shared ownership and trust. This approach places responsibility where it belongs: on human judgment, team culture, and the practices we build together. When wielded deliberately and reviewed rigorously, AI serves discovery without compromising the integrity that makes science communal. To see these principles implemented in concrete infrastructure, explore my work on AI-Assisted Development, where I built SolarWindPy as an intentionally opinionated framework with seven specialized validation agents. That infrastructure demonstrates how deliberate architecture makes AI collaboration safe and effective for research, production, and regulated software development.
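As one illustration of requirements-first traceability, here is a minimal pytest sketch under assumed conventions; the marker name, requirement ID, and test are mine for illustration, not SolarWindPy's actual scheme:

```python
import pytest

# Hypothetical convention: each test declares the requirement it verifies, and
# commit messages cite the same ID (e.g., "Implements REQ-042"), so a reviewer
# can walk from requirement to commit to test. The custom marker should be
# registered in pytest.ini to avoid unknown-marker warnings.
requirement = pytest.mark.requirement

@requirement("REQ-042")  # illustrative ID, not from a real requirements document
def test_density_is_nonnegative():
    densities = [1.2, 0.0, 3.5]  # stand-in values for real physical inputs
    assert all(n >= 0 for n in densities)
```

Tooling can then enumerate which requirements have tests and which commits touched them, keeping reviews anchored to criteria rather than to people.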