AI Red Teaming Specialist

AI Red Teaming Specialist (Praca zdalna)

ASTEK Polska

Poland (Remote)
B2B, PERMANENT
💼 B2B
PERMANENT
🤖 AI security
red teaming
LLMs
🧠 adversarial ML
cloud platforms

Podsumowanie

Zatrudnienie jako AI Red Teaming Specialist w polskim oddziale ASTEK. Odpowiedzialność za ocenę bezpieczeństwa systemów AI. Porady dotyczące zabezpieczeń, testowanie i raportowanie. Zdalna praca.

Słowa kluczowe

AI securityred teamingLLMsadversarial MLcloud platforms

Benefity

  • Długoterminowa współpraca
  • Szkolenia techniczne i certyfikacje
  • Mentoring w ramach Centrum Kompetencyjnego
  • Jasna ścieżka kariery
  • Pakiet benefitów pracowniczych: Multisport, opieka medyczna, ubezpieczenie na życie, dofinansowanie do transportu publicznego

Opis stanowiska

About the Astek GroupFounded in France in 1988, Astek Group is a global partner in engineering and IT consulting. Leveraging deep expertise across a wide range of industrial and technological sectors, Astek supports international clients in the development and delivery of products and services, while actively contributing to their digital transformation initiatives.Since its inception, Astek Group has built its growth on a strong culture of entrepreneurship and innovation, as well as on the continuous development of the skills of its more than 10,000 employees, who work every day on diverse and challenging engineering and technology projects.Join a rapidly growing group in France and worldwide, with 2024 revenues of €705 million.For more information, please visit: https://astek.netRole OverviewWe are seeking a proactive and technically skilled AI Red Teaming Specialist to join a security-focused AI/ML project. In this role, you will be responsible for evaluating the security, safety, and resilience of AI systems, with a strong focus on generative, reasoning, and agentic Large Language Models (LLMs).You will adopt an adversarial mindset to identify vulnerabilities and failure modes in AI-driven applications and provide actionable recommendations to harden systems against real-world threats before production deployment.Project ContextThe project is part of an advanced AI security initiative focused on secure, confidential, and resilient AI solutions. The work spans infrastructure, AI workloads, and application-level security, with a strong emphasis on agentic AI systems and next-generation AI threat models.The environment is highly technical, research-driven, and collaborative, involving close interaction with engineering, data science, and product teams.Key ResponsibilitiesConduct AI Red Teaming ExercisesPlan and execute end-to-end red teaming operations and adversarial simulations targeting LLM-powered systems and applicationsDesign & Execute Adversarial AttacksDevelop and perform attacks such as:prompt injectiondata poisoningjailbreakingmodel evasion and misuse scenariosIdentify weaknesses, unsafe behaviors, and failure modes in generative and agentic AI systemsVulnerability Analysis & ReportingSystematically document findings and analyze red teaming resultsProduce clear, high-quality reports describing:identified risks and vulnerabilitiesimpact and severityactionable remediation recommendationsCommunicate results to both technical and non-technical stakeholdersTooling & AutomationContribute to the development and improvement of internal tools and frameworks for AI security testingWork with automated prompt generation and scenario testing tools such as:GarakPyRITcustom red teaming solutionsCross-Team CollaborationWork closely with data science, engineering, and product teamsEnsure AI security considerations are embedded throughout the AI development lifecycleProvide expert guidance on AI security best practicesResearch & Threat IntelligenceContinuously research emerging AI security threats, adversarial ML techniques, and evolving attack vectorsStay current with industry trends, academic research, and real-world incidentsTechnical Requirements (Must Have / Preferred)Strong experience in AI security, adversarial ML, or offensive security rolesHands-on experience red teaming LLMs or generative AI systemsFamiliarity with:OWASP Top 10 for LLMsNIST AI Risk Management FrameworkAI guardrails systems (e.g. Amazon Bedrock Guardrails, NVIDIA NeMo Guardrails)Experience with cloud platforms:AWS, GCP, or AzureUnderstanding of MLOps pipelines and AI deployment workflowsPreferred certifications:Offensive Security Certified Professional (OSCP)Certified AI Red Teaming Professional (CAIRTP)Background in one or more of the following is a plus:content moderationdisinformation analysiscyber-threat intelligenceSoft Skills & AvailabilityStrong analytical and critical thinking skillsClear communication and reporting abilitiesAbility to work with distributed, international teamsWorking hours requirement:overlap with San Francisco timezoneeither:4 days per week until 7:00 PM CET, or2 days per week until 9:00 PM CETWhat We OfferLong-term collaboration – stability and ongoing career opportunitiesTechnical training and certifications – continuous skill development and professional growthMentoring through our Competence Center – from day one, become part of a community that allows you to enhance your skills, participate in conferences, and share knowledge with colleagues facing similar challengesClear career path – transparent progression opportunitiesEmployee benefits package, including:Multisport cardPrivate medical careLife insuranceSubsidy for public transportFriendly work environment – team-building events, social gatherings, and corporate partiesReferral ProgramDo you know someone who might be interested in this offer? Take advantage of our referral program and earn a bonus of up to PLN 7,000!Link: https://astek.pl/system-rekomendacji/Privacy NoticeThe administrator of your personal data is ASTEK Polska sp. z o.o., located in Warsaw (00-133) at Al. Jana Pawła II 22. You have the right to access your data, request its deletion, and other rights regarding personal data. Detailed information on data processing can be found here: https://astek.pl/polityka-prywatnosciYou may withdraw your consent at any time. To withdraw consent, please contact us via email at [email protected] or by writing to the data administrator at the address above.AO217841

Zaloguj się, aby zobaczyć pełny opis oferty

Wyświetlenia: 16
Opublikowanadzień temu
Wygasaza 12 dni
Rodzaj umowyB2B, PERMANENT
Źródło
Logo

Podobne oferty, które mogą Cię zainteresować

Na podstawie "AI Red Teaming Specialist"

Nie znaleziono ofert, spróbuj zmienić kryteria wyszukiwania.