AI, Machine Learning, and Deep Learning: What’s Actually Different
Refined notes and thoughts on AI, Machine Learning, and Deep Learning: What’s Actually Different
READ MORE →Resources • Tutorials • Guides
Refined notes and thoughts on AI, Machine Learning, and Deep Learning: What’s Actually Different
READ MORE →Supervised learning algorithms summary for AI security professionals
READ MORE →Unsupervised Algorithm Anomaly detection for AI security professionals. ML
READ MORE →DBSCAN and HDBSCAN for AI security professionals Unsupervised learning algorithm
READ MORE →Gaussian mixture model for AI security professionals Unsupervised learning algorithm
READ MORE →Graph communities and embeddings for AI security professionals for Unsupervised learning algorithm
READ MORE →Unsupervised Algorithm Principal Component Analysis (PCA) for AI security professionals ML
READ MORE →Unsupervised Algorithms overview for AI Security Professionals ML
READ MORE →Unsupervised learning algorithm a quick comparison
READ MORE →Unsupervised learning algorithm K-means clustering for AI security professionals
READ MORE →Notes about Linear and Logistic regression
READ MORE →What supervised learning is, how it works in practice, core algorithms, evaluation, and common pitfalls — explained plainly
READ MORE →Descision trees for security professionals
READ MORE →Naive bayes for AI security professionals
READ MORE →Continue on Supervised algorithms Gradient boosted trees for AI security professionals
READ MORE →Continue supervised algorithms k Nearest neighbor for AI security professionals
READ MORE →Continue on Supervised algorithms for ML
READ MORE →Support vector machine for AI security professionals Machine Learning algorithms
READ MORE →Overview for AI Security Scoping: What Organizations Get Wrong Serie
READ MORE →Single-turn testing dominates. Real attacks happen across conversations. What gets missed: Context manipulation across multiple turns Session isolation between users Context window overflow behavior Cross-turn instruction injection Adversaries condition models gradually. Single-turn tests never see this. Article 2 in the Scoping series
READ MORE →Organizations use OpenAI or Anthropic APIs. Scope says “test the model” when the model is a black box. What gets missed: Trust boundary definition What’s actually testable (integration, not the model) Data sent to third-party APIs Compliance implications You can’t test the model. You can test your integration with it.
READ MORE →Security teams test deployed models. AI teams train models. The pipeline between them is untested. What gets missed: Training data sources and integrity Data poisoning opportunities Fine-tuning risks Supply chain for models and datasets If adversaries can influence training data, they control the model.
READ MORE →Security teams test technical vulnerabilities. Safety issues are “someone else’s problem” until they create legal liability. What gets missed: Regulatory compliance (GDPR, EU AI Act) Bias and discrimination Harmful content generation Alignment failures Safety failures create organizational risk just like security failures.
READ MORE →Point-in-time assessments work for static systems. AI systems change constantly. What gets missed: Model updates (by providers or through retraining) Prompt modifications Integration changes Training data updates Testing in January is obsolete by March if the system changed.
READ MORE →Assessments find vulnerabilities. Logs detect exploitation. Most scoping focuses on finding, not detecting. What gets missed: What security events are logged Log retention and security Monitoring and alerting capabilities Incident investigation procedures Finding vulnerabilities matters less if exploitation goes undetected.
READ MORE →Generic RFPs produce generic proposals that lead to generic testing. What gets missed: System architecture details vendors need Specific test scenarios required Access and constraint information Realistic timelines and budgets Bad RFPs result in proposals that don’t match actual needs.
READ MORE →Organizations have IR plans for traditional breaches. AI incidents don’t fit existing procedures. What gets missed: AI-specific incident categories Model-specific containment strategies Investigation procedures for AI failures Recovery approaches for compromised models When AI incidents occur, traditional IR doesn’t apply.
READ MORE →Models with function calling can execute code and query databases. Scoping ignores the execution layer. What gets missed: Authorization for function calls Parameter validation (SQL injection, command injection, path traversal) Function call chains Rate limiting on expensive functions Testing the model doesn’t cover what the model can do through tools.
READ MORE →Article 1: Model Integration Points Nobody Tests Most scoping focuses on the model while ignoring the integration layer where actual vulnerabilities exist. What gets missed: How user input becomes model prompts (string concatenation, JSON encoding, validation) How applications authenticate to model APIs (API key storage, rotation) How model outputs get processed (HTML rendering, code execution, database queries) Rate limiting, error handling, session management The model does what it's designed to do. The vulnerabilities are in how applications integrate with models.
READ MORE →Uriel Kosayev talk from Orange Systems Inspiration Session about AI
READ MORE →Google research nested learning
READ MORE →Excellent video serie explaining AI
READ MORE →Mcp security cheatsheet
READ MORE →The pdf top owasp top 10 for llms
READ MORE →