top of page

AI Engineering & Safety Architecture

Building Safe & Secure
AI Systems

I'm Alex — an AI Security Engineer who fell into one of the most interesting intersections in tech: making AI systems harder to break. I came up through blue team cybersecurity before pivoting into AI engineering, which means I think about the systems I build differently than most. I don't just ask "does it work?" — I ask "how would someone break it?" That mindset drove me to build red-teaming tools, agentic security middleware, and RAG attack-and-defense pipelines. I'm self-taught, independent, and obsessed with the security challenges that agentic AI is quietly introducing into production systems right now.

My Portfolio

EASE.png

EASE Ethical Decision making framework

EASE is a structured decision-making framework for AI agents focused on safe and ethical action selection. The framework ensures AI systems make well-reasoned decisions by systematically analyzing the environment, generating possible actions, evaluating their safety implications, and electing the best course of action.


https://github.com/Googly-Boogly/EASE

Ethicist_Agent.png

Happy Choices AI

HappyChoicesAI is an AI-driven utilitarian ethicist agent designed to help users make ethical decisions. By analyzing user-inputted dilemmas, HappyChoicesAI suggests the most ethical actions based on utilitarian principles, aiming to maximize happiness and minimize suffering. This project leverages advanced AI technologies to process and evaluate ethical decisions, providing grounded and pragmatic solutions to complex dilemmas.

https://github.com/Googly-Boogly/HappyChoicesAI

DIsaster_recovery.png

Drone Disaster Recovery Simulation

A disaster recovery drone swarm simulation showcasing agentic behaviors. Autonomous drones navigate a 100x100 grid environment, coordinating via pheromone-based communication (not direct messaging) to locate and assist victims. Built on **CrewAI** and **Claude AI (Anthropic)**.

https://github.com/Googly-Boogly/Agent-Showcase

rag_pipeline.png

Chat WIth Docs

RAG Poisoning Demo + Defense is a full-stack security research project that demonstrates real-world adversarial attacks against retrieval-augmented generation pipelines and then implements the mitigations. The attack phase shows three live exploit classes — prompt injection, factual override, and indirect jailbreak — executed against a live ChromaDB vector knowledge base, illustrating how malicious documents can hijack LLM responses at the retrieval layer. The defense phase implements a four-layer mitigation pipeline combining regex input filtering, source trust scoring, semantic anomaly detection, and an LLM-as-judge query classifier, significantly reducing attack surface across both document ingestion and query execution paths. Built with FastAPI, ChromaDB, React, and Docker, the project is aligned with OWASP LLM Top 10 (LLM01) and includes an interactive demo surface for live attack-and-defense walkthroughs, making it a practical reference for securing production RAG deployments.

https://github.com/Googly-Boogly/Chat_With_Docs

wheelchair.png
ChatGPT Image Mar 19, 2026, 08_04_16 PM.png
ChatGPT Image Mar 19.png
AI_SECURITY_AUDIT.png

AI Security Audit & Checklist

Agentic Guardrail Layer is a security middleware system for autonomous AI agents that intercepts every tool call and LLM output before execution, blocking prompt injection, path traversal, shell injection, and data exfiltration attempts in real time. Built in Python with support for both Anthropic and OpenAI backends, it implements a composable pipeline of tool validators, output sanitizers, and anomaly loggers with Pydantic schema enforcement and per-tool rate limiting. All security events are captured in structured JSONL audit trails, providing full observability into agent behavior. The system is containerized with a multi-stage Docker build and runs under a least-privilege non-root runtime, making it production-ready for securing multi-agent workflows against both external attacks and internal privilege escalation.

https://github.com/Googly-Boogly/agent_guardrail_layer

AI Security Audit Framework is a structured assessment tool that enables security teams to systematically evaluate LLM deployments against real-world threat models. Built as a CLI tool and web interface, it guides auditors through a comprehensive checklist covering all ten OWASP LLM Top 10 categories alongside data handling practices, access controls, output monitoring, and supply chain risks. Each audit generates a scored risk report with prioritized remediation guidance, giving security and engineering teams a clear picture of their AI deployment's threat surface and where to focus hardening efforts first. Designed with enterprise workflows in mind, the framework functions as a structured first-pass assessment — the equivalent of OWASP ZAP for traditional web apps, but purpose-built for the unique attack surfaces introduced by LLM-powered applications in production environments.

https://github.com/Googly-Boogly/AI_Security_Audit

computer Vision Electric Wheelchair

Developed an assistive electric wheelchair control system using computer vision and machine learning to enable hands-free directional control. The system tracks and interprets precise hand movements in real time, translating them into movement commands for the wheelchair. This solution was designed specifically for individuals who are unable to operate traditional joystick-based controls.

Built a custom hand-tracking pipeline using a trained computer vision model to detect gesture position, trajectory, and motion intent. Implemented real-time signal processing and movement smoothing to ensure stable, safe navigation. The project focused on accessibility, safety constraints, and low-latency performance to provide a reliable alternative input method for mobility-impaired users.

https://github.com/Googly-Boogly/CV-Wheelchair/

LLM Security Scanner

LLM Security Scanner is an automated red-teaming tool that probes LLM endpoints across multiple providers (OpenAI, Anthropic, Google) for common vulnerabilities including prompt injection, jailbreaks, data leakage, and harmful output generation. It executes 24+ adversarial probes concurrently using Python asyncio, combining regex-based keyword detection with an LLM-as-judge scoring pipeline to evaluate attack success. Results are graded on an A–F vulnerability scale and exported as structured JSON reports with a rich terminal UI, enabling reproducible security audits of production LLM deployments.

https://github.com/Googly-Boogly/LLM_Security_Scanner

Agentic Guardrail Layer

Advanced RAG Pipelines

Deep optimization of retrieval-augmented generation architectures. Implementation of hybrid search mechanisms, vector embeddings, and re-ranking layers to reduce hallucinations in complex domain-specific document sets.

EASE Safety Framework

Architecting robust AI safety protocols through the EASE framework. Developing utilitarian ethicist agents that utilize constrained logic to ensure autonomous decision-making remains aligned with safety benchmarks.

Disaster Mesh Networks

Engineering decentralized pheromone-based communication systems for robotic swarms. Specialized protocols designed for disaster recovery simulations where standard infrastructure is non-existent or compromised.

ARCHITECTING ETHICAL AI SYSTEMS FOR A SECURE FUTURE

As AI systems grow more autonomous, the decisions they make — and the ways they can be manipulated — carry real consequences. I build at the intersection of AI engineering and security because I believe the two are inseparable: a system that can be hijacked, poisoned, or coerced into harmful behavior isn't just a security failure, it's an ethical one. From guardrail layers that intercept malicious tool calls before execution, to safety evaluation frameworks that prevent "technically correct but ethically wrong" AI decisions, my work is driven by a single conviction — that the people building AI systems have a responsibility to think adversarially, design defensively, and never ship something they wouldn't trust to act autonomously in the world.

LET’S SCALE
AI SAFETY
TOGETHER.

Contact Us

We'd love to hear from you. Send us a message and we'll respond as soon as possible.

How can we help you?

Seeking collaboration on AI safety frameworks, disaster recovery simulations, and ethical agent architectures.

AI SAFETY • ETHICAL AGENTS • RAG PIPELINES • DISASTER RECOVERY •

bottom of page