AI Alignment Research Engineer (AI Labs) Job at Krutrim, Palo Alto, CA

  • Krutrim
  • Palo Alto, CA

Job Description

Principal Research Scientist, AI Alignment (Reinforcement Learning, Red Teaming, Explainability)

Location: Palo Alto (CA, US)

About Us:

Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI computing infrastructure, an AI Cloud, multilingual and multimodal foundation models, and AI-powered end applications. We are India’s first AI unicorn and built the country’s first foundation model.

Our AI stack empowers consumers, startups, enterprises, and scientists across India and around the world to build their own AI applications and models. While we are building foundation models across text, voice, and vision relevant to our focus markets, we are also developing AI training and inference platforms that enable AI research and development across industry domains.

The platforms being built by Krutrim have the potential to impact millions of lives in India, across income and education strata, and across languages.

Job Description:

We are seeking an experienced and visionary Principal Research Scientist to lead our AI Alignment efforts, encompassing Trust and Safety, Interpretability, and Red Teaming. In this critical role, you will oversee teams dedicated to ensuring our AI systems are safe, ethical, interpretable, and reliable. You will work at the intersection of cutting-edge AI research and practical implementation, guiding the development of AI technologies that positively impact millions of lives while adhering to the highest standards of safety and transparency.

Responsibilities:

  1. Provide strategic leadership for the AI Alignment division, encompassing Trust and Safety, Interpretability, and Red Teaming teams.
  2. Oversee and coordinate the efforts of the Lead AI Trust and Safety Research Scientist and Lead AI Interpretability Research Scientist, ensuring alignment of goals and methodologies.
  3. Develop and implement comprehensive strategies for AI alignment, including safety measures, interpretability techniques, and robust red teaming protocols.
  4. Drive the integration of advanced safety and interpretability techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) across our AI development pipeline (a minimal DPO sketch follows this list).
  5. Establish and maintain best practices for red teaming exercises to identify potential vulnerabilities and ensure our models do not generate harmful or undesirable outputs.
  6. Collaborate with product and research teams to define and implement safety and interpretability aspects that ensure our AI models deliver helpful, honest, and transparent outputs.
  7. Lead cross-functional initiatives to integrate safety measures and interpretability throughout the AI development lifecycle.
  8. Stay at the forefront of AI ethics, safety, and interpretability research, fostering a culture of continuous learning and innovation within the team.
  9. Represent the company in industry forums, conferences, and regulatory discussions related to AI alignment and ethics.
  10. Manage resource allocation, budgeting, and strategic planning for the AI Alignment division.
  11. Mentor and develop team members, fostering a collaborative and innovative research environment.
  12. Liaise with executive leadership to communicate progress, challenges, and strategic recommendations for AI alignment efforts.
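
To ground one of the techniques named in item 4, here is a minimal sketch of the Direct Preference Optimization (DPO) loss, assuming per-sequence log-probabilities of the chosen and rejected responses have already been computed; all function and tensor names are illustrative, not drawn from any Krutrim codebase.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss.
# Assumes token log-probabilities have already been summed into a single
# log-probability per sequence; tensor names are illustrative only.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Logistic loss on the policy's preference margin relative to a
    frozen reference model, scaled by the temperature beta."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_margin - ref_margin)
    return -F.logsigmoid(logits).mean()  # numerically stable -log(sigmoid(x))

# Toy usage: a batch of 4 preference pairs with random log-probabilities.
if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b),
                    torch.randn(b), torch.randn(b))
    print(loss.item())
```

Minimizing this loss pushes the policy's log-probability margin for preferred responses beyond the reference model's margin, which is how DPO sidesteps the explicit reward model and PPO loop used in classic RLHF.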

Qualifications

  1. Ph.D. in Computer Science, Machine Learning, or a related field with a focus on AI safety, ethics, and interpretability.
  2. 7+ years of experience in AI research and development, with at least 3 years in a leadership role overseeing multiple AI research teams.
  3. Demonstrated expertise in AI safety, interpretability, and red teaming methodologies for large language models and multimodal systems.
  4. Strong understanding of advanced techniques such as Reinforcement Learning from Human Feedback (RLHF), Group Relative Policy Optimization (GRPO), Reinforcement Learning from Verifiable Rewards (RLVR), Direct Preference Optimization (DPO), Proximal Policy Optimization (PPO), Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and attention-based methods for AI safety and interpretability (see the SHAP sketch after this list).
  5. Proven track record of leading teams working on models with tens to hundreds of billions of parameters.
  6. Experience in designing and overseeing comprehensive red teaming exercises for AI systems.
  7. Deep knowledge of ethical considerations in AI development and deployment, including relevant regulatory frameworks and industry standards.
  8. Strong publication record in top-tier AI conferences and journals, specifically in areas related to AI safety, ethics, and interpretability.
  9. Excellent communication and presentation skills, with the ability to convey complex technical concepts to diverse audiences, including executive leadership and non-technical stakeholders.
  10. Demonstrated ability to manage and mentor diverse teams of researchers and engineers.
  11. Strong project management skills with experience in resource allocation and budgeting for large-scale research initiatives.
  12. Visionary mindset with the ability to anticipate future trends and challenges in AI alignment and ethics.
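
Since SHAP is likewise one of the named interpretability techniques (item 4 above), the sketch below shows a minimal attribution run using the open-source shap package on a small scikit-learn regressor; the dataset and model are illustrative stand-ins, not anything from Krutrim's stack.

```python
# Minimal SHAP feature-attribution sketch on a tree ensemble.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; an LLM interpretability pipeline would
# use different explainers and inputs.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# One additive attribution per (sample, feature): each prediction equals
# the explainer's base value plus the sum of that row's SHAP values.
print(shap_values.shape)  # (100, 10)
```

Each row of shap_values decomposes one prediction into additive per-feature contributions, the game-theoretic style of explanation that, alongside LIME and attention-based methods, underpins the interpretability work this role oversees.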

Impact:

As the Principal Research Scientist of AI Alignment, you will play a pivotal role in shaping the future of responsible AI development. Your leadership will ensure that our AI systems are not only powerful and innovative but also safe, interpretable, and aligned with human values. By fostering collaboration between Trust and Safety, Interpretability, and Red Teaming efforts, you will create a holistic approach to AI alignment that sets new industry standards. Your work will be instrumental in building public trust in AI technologies and positioning our company as a leader in ethical and responsible AI development.
