
Putting It All Together: Tools, Training & Trust
Need tech help? Email [email protected] and our team will get back to you as soon as possible during the school day.
Over the past two weeks, we've talked about why your child's relationship with AI must be different from yours, and how we're creating clarity around when AI supports learning versus when it shortcuts it. But principles and frameworks are one thing. Making them work in practice is another.
This week, I want to talk about trust.
Trust is complicated with AI in schools. I know many of you are holding two seemingly contradictory concerns at once: worry that we're not doing enough to protect your child from the risks of AI, and worry that we're doing too much—that we're either overreacting or creating systems that feel invasive or controlling.
Both concerns are valid. And here's what I've learned: the only way through that tension is to keep coming back to the same principle. Every decision we make about AI infrastructure—the tools we choose, how we train teachers—has to center on one question: Does this support your child's cognitive development while preparing them to interact responsibly with this technology?
The Tools We've Chosen
When students use AI at Brookwood in approved educational contexts, they're primarily using Flint AI. This isn't a random choice. Flint was designed specifically for educational use with three key features:
First, it's COPPA compliant, meaning it meets federal privacy requirements for students under 13. Second, teachers can create custom AI tutors for specific assignments—remember that example from last week about the history teacher creating an AI that asks guiding questions instead of providing answers? That's Flint. Third, teachers can see student interactions with the AI, which allows them to understand how students are thinking, where they're struggling, and when they might need guidance.
Students should not be using ChatGPT, Claude, or other general-purpose AI platforms for schoolwork unless explicitly directed by a teacher. Those platforms lack the educational guardrails, age-appropriate design, and teacher visibility that Flint provides.
Teacher Training: Learning Alongside Students
Here's an uncomfortable truth: most of our teachers are learning about AI at the same time they're teaching about it. We don't have decades of best practices to draw from. What we do have are thoughtful educators willing to experiment, reflect, and adjust.
Our teacher training has focused on three areas. First, understanding the technology itself: as a Google Workspace school, our teachers have access to premium versions of Gemini and NotebookLM for their own exploration. Second, designing assignments with AI in mind, since an assignment that worked five years ago might now be completely undermined by AI. Third, recognizing when students are using AI inappropriately—not to play "gotcha," but to have productive conversations about learning.
We're holding regular faculty workshops, sharing what's working and what isn't, and building a culture where it's okay to say "I tried this approach with AI and it didn't work; here's what I learned."
What This Means for Families
If your child is using AI for schoolwork at home, they should be using Flint unless their teacher has given explicit permission to use a different tool. If they're not sure, they should check with their teacher.
If your child asks to use AI for homework and the teacher hasn't specified, the default answer should be: "Check with your teacher first." This isn't about being rigid—it's about building the habit of thinking before using AI, rather than just reaching for it automatically.
The conversations you have at home about AI matter. When your child tells you about using AI for a project, ask them: "How did the AI help you think differently?" or "What did you learn by using AI for this?" rather than just "Did you get it done?" We're trying to build metacognitive awareness—the ability to think about their own thinking and learning.
We're Still Figuring This Out
I want to end where I started: with honesty about what we don't know. We're building infrastructure and training teachers in real time, learning as we go.
What we do know is this: AI isn't going away. Our students will interact with it throughout their lives. Our job isn't to protect them from AI—it's to prepare them to use it thoughtfully, critically, and in ways that enhance rather than replace their own thinking.
The infrastructure we're building is in service of that goal. Not perfect, not complete, but guided by a commitment to keeping student development at the center of every decision we make.
David Saunders
Director of Leadership, Changemaking & Technology