
Job Information
Healthfirst GenAI Application Security Engineer in Remote, Minnesota
Duties and Responsibilities:
Drive the strategic direction of Secure AI Development programs, embedding security into the AI ecosystem.
Advise senior executives, engineering leaders, and stakeholders on AI/ML security risks and mitigation strategies.
Lead security assessments, including threat modeling, risk assessments, and security architecture reviews for GenAI platforms and cloud infrastructure, focusing on AWS Bedrock and other platforms.
Develop and implement security frameworks tailored to AI/ML systems, addressing risks like model poisoning, adversarial AI, and data privacy threats.
Define security best practices for AI model development, deployment, and monitoring to ensure resilience against emerging threats.
Establish security monitoring and automation for GenAI applications, enabling scalable, proactive threat detection.
Conduct secure code reviews, penetration testing, and vulnerability assessments to identify and mitigate AI-specific security risks.
Develop security policies and governance structures aligned with industry regulations (e.g., HIPAA, PCI) and ethical AI standards pertinent to Healthfirst.
Mentor and develop engineers, fostering a security-first mindset across engineering and product teams.
Stay ahead of evolving threats, AI-specific security risks, and industry best practices.
Engage with internal and external stakeholders to ensure regulatory compliance and AI ethics alignment.
Lead and contribute to discussions, presentations, and whitepapers, establishing Healthfirst as a leader in AI security.
Support development of incident response plans and mitigation strategies tailored to GenAI applications and environments.
Minimum Qualifications:
Bachelor's Degree in Computer Science or Cybersecurity, or High School Diploma/GED (accredited) with equivalent work experience.
5 - 8+ years of experience in application security, secure software development, or cybersecurity, with at least 2 - 3+ years focused on AI/ML security or cloud security.
Expertise in secure AI/ML development, including model security risks, adversarial attacks, and ethical AI considerations.
Hands-on experience with cloud platforms, particularly AWS (AWS Bedrock knowledge is a plus).
Proficiency in secure software development, threat modeling, and vulnerability management within AI/ML systems, web apps, and APIs.
Experience with security testing methodologies, such as SAST, DAST, and SCA.
Strong communication and presentation skills, capable of engaging with executive leadership, technical teams, and external stakeholders.
Proven leadership experience, driving security initiatives, influencing security strategies, and mentoring security teams.
Preferred Qualifications:
Experience with GenAI platforms such as AWS Bedrock, OpenAI, or similar.
Expertise with application security tools (e.g., Veracode, Burp Suite, or other code scanning tools).
Experience in web application and API penetration testing.
Deep understanding of DevSecOps principles, including container security, IaC security, and cloud-native security best practices.
Experience in security governance for AI ethics, data privacy, and regulatory compliance frameworks.
Experience collaborating with regulators, auditors, and compliance teams to ensure AI security governance aligns with industry standards.
Security certifications (e.g., CISSP, AWS Certified Security, OSCP) are a plus.
Compliance and Regulatory Responsibilities: See Above
License/Certification: See Above
WE ARE AN EQUAL OPPORTUNITY EMPLOYER. Applicants and employees are considered for positions and are evaluated without regard to mental or physical disability, race, color, religion, gender, gender identity, sexual orientation, national origin, age, genetic information, military or veteran status, marital status, or any other protected Federal, State/Province, or Local status unrelated to the performance of the work involved.