
Staff Security Engineer (AI Security)

Box Inc.

Warsaw
36000 - 40500 PLN
Permanent contract
🐍 Python

Must have

  • Security

  • SDLC

  • Automation

  • Cloud

  • English (C1)

Nice to have

  • LLM

  • AI Security

Requirements description

Who you are:

  • Experienced security engineer with 5+ years in application security, DevSecOps, or security tooling, ideally with exposure to AI/ML security challenges.
  • Deep understanding of AI agent architectures, generative AI models, and associated security risks such as prompt injection, adversarial attacks, and autonomous decision-making vulnerabilities.
  • Proven track record implementing security tools and automation (SAST, DAST, SCA, API security scanning) integrated into CI/CD pipelines at scale.
  • Experience with or strong interest in applying LLMs to security use cases, such as code analysis, vulnerability detection, or security documentation.
  • Demonstrated ability to translate security requirements into practical AI applications that enhance the secure development lifecycle.
  • Skilled in threat modeling methodologies and able to adapt traditional frameworks to dynamic AI systems.
  • Proficient in at least one scripting language (e.g. Python) and familiar with multiple programming languages, cloud-native environments and container security.
  • Strong communicator capable of articulating complex AI security concepts to both technical and non-technical stakeholders.
  • Passionate about cybersecurity innovation, with active participation in security communities, conferences, CTFs, bug bounty programs, or CVE submissions preferred.
  • Growth mindset with a proactive approach to learning and problem-solving in fast-evolving technology landscapes.
  • Preferred Skills:
    • Experience working with Security Architecture patterns and context-aware access control mechanisms.
    • Background in adversarial machine learning or AI robustness testing.
    • Contributions to open source AI security projects or research publications in AI safety/security.
    • Experience building or working with LLM-powered developer tools or security automation.
    • Knowledge of prompt engineering techniques to optimize LLM outputs for security applications.
    • Understanding of the limitations of current LLM technologies and strategies to mitigate false positives/negatives in security contexts.
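The prompt-injection and guardrail topics listed above can be sketched minimally. The patterns and function below are illustrative assumptions for this posting, not Box's implementation; production systems layer classifiers, allow-lists, and LLM-based judges on top of simple heuristics like this:

```python
import re

# Hypothetical heuristic filter illustrating a basic prompt-injection
# guardrail. The pattern list is an illustrative assumption; real
# deployments use far richer detection than regex matching.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A filter like this would typically run before user text reaches an agent's tool-calling loop, which is where the "runtime guardrails" mentioned in this role apply.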

Offer description

Our compensation structure consists of base salary and equity in the form of restricted stock units.

What is Box?

Box (NYSE:BOX) is the leader in Intelligent Content Management. Our platform enables organizations to fuel collaboration, manage the entire content lifecycle, secure critical content, and transform business workflows with enterprise AI. We help companies thrive in the new AI-first era of business. Founded in 2005, Box simplifies work for leading global organizations, including AstraZeneca, JLL, Morgan Stanley, and Nationwide. Box is headquartered in Redwood City, CA, with offices across the United States, Europe, and Asia.

By joining Box, you will have the unique opportunity to continue driving our platform forward. Content powers how we work. It’s the billions of files and information flowing across teams, departments, and key business processes every single day: contracts, invoices, employee records, financials, product specs, marketing assets, and more. Our mission is to bring intelligence to the world of content management and empower our customers to completely transform workflows across their organizations. With the combination of AI and enterprise content, the opportunity has never been greater to transform how the world works together and at Box you will be on the front lines of this massive shift.

Why Box needs you:

We are seeking a highly skilled and visionary Staff Security Engineer to lead the security strategy and implementation for Generative AI and Agentic AI technologies within Box's platform. You will be instrumental in designing, developing, and operationalizing security controls that address the novel risks introduced by autonomous AI agents and generative models. Additionally, you will drive strategic initiatives to leverage LLMs to enhance our secure development lifecycle. Your work will ensure that Box remains a trusted leader in AI-powered content management by embedding security-by-design principles into all AI features and tooling.

Percentage of Time Spent:

  • 40% building the AI Security program
  • 30-40% leading the strategy for building generative AI capabilities
  • 20-30% partnering with engineering teams

Box lives its values, with community and in-person collaboration being a core part of our culture. Boxers are expected to work from their assigned office a minimum of 3 days per week. Your Recruiter will share more about how we work and company culture during the hiring process.

At Box, we believe unique and diverse experiences benefit our culture, our products, our customers, our company, and our world. We aim to recruit a passionate, high-performing workforce that reflects the world we live in. If you are head-over-heels about this role but unsure if you meet all the requirements, we encourage you to apply!

Your responsibilities

  1. Lead the design and implementation of security architectures specifically tailored for Generative AI and Agentic AI systems, including agentic identity models, least privilege access, runtime guardrails, and audit logging.
  2. Develop threat modeling approaches adapted for dynamic, non-deterministic AI agent behaviors, identifying autonomy-related risks such as prompt injection, tool misuse, agent impersonation, and multi-agent system attacks.
  3. Build and integrate advanced security tooling and automation to detect, prevent, and respond to AI-specific vulnerabilities across the development lifecycle, including adversarial testing frameworks for AI agents.
  4. Spearhead the strategy for integrating LLMs into the secure development lifecycle, including code review automation, vulnerability detection, and security documentation generation.
  5. Design and implement AI-powered security tools that can analyze code, identify potential vulnerabilities, and recommend secure coding patterns at scale.
  6. Lead proof-of-concept initiatives to demonstrate how generative AI can improve security posture through automated threat modeling, security testing, and developer education.
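As a rough illustration of the CI/CD-integrated security tooling these responsibilities describe, a pipeline gate over aggregated scanner findings might look like the sketch below. The severity scale, field names, and failure threshold are assumptions for the example, not a Box policy:

```python
from dataclasses import dataclass

# Illustrative CI/CD security gate: aggregate findings from SAST/DAST/SCA
# scanners and decide whether the pipeline run should fail.
@dataclass
class Finding:
    tool: str        # e.g. "sast", "dast", "sca"
    rule_id: str     # scanner-specific rule or CVE identifier
    severity: str    # "low" | "medium" | "high" | "critical"

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list[Finding], fail_at: str = "high") -> bool:
    """Return True if any finding is at or above the failure threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return any(SEVERITY_RANK[f.severity] >= threshold for f in findings)
```

In practice a gate like this sits behind the scanner integrations (SAST, DAST, SCA, API scanning) named in the requirements, with LLM-assisted triage layered on top to suppress false positives before the threshold check.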


Published 5 days ago
Expires in 28 days
Contract type: permanent