QA Engineer
OrbitShift.AI
Job Description
About OrbitShift
At OrbitShift, we are building the world's first AI-native Sales Operating System, trusted by
enterprise teams to accelerate their GTM motion. Our multi-agent AI system delivers actionable
account insights, RFP response generation, targeted nudges, and tailored sales content. Backed by
Peak XV (formerly Sequoia Capital), Stellaris Venture Partners, and other marquee investors,
we are a fast-growing, 3-year-old startup with a team from Amazon, McKinsey, IIT, and
Stanford.
Quality is not a phase at OrbitShift — it is a mindset embedded across the engineering culture.
We are looking for a QA Engineer who takes genuine ownership of product reliability and brings
both the discipline of structured manual testing and the leverage of well-built automation to keep
our platform enterprise-grade as it scales.
Role Overview
As a QA Engineer at OrbitShift, you will be the last line of defence before software reaches
enterprise customers — and an early line of offence in catching problems before they are even
built. You will work closely with product managers, engineers, and designers across the full
development lifecycle: reviewing requirements for testability, designing test strategies, executing
manual test cases, and building automation suites that give the team confidence to ship fast
without breaking things.
This role demands someone who is meticulous without being slow, who can think adversarially
about product behaviour, and who understands that testing AI-powered systems introduces a
layer of non-determinism that standard QA playbooks do not fully cover. You will help define
what quality means for an AI product, not just validate it.
Key Responsibilities
• Test Strategy & Planning: Review PRDs, user stories, and technical designs to identify
testability gaps early. Define test plans, test coverage matrices, and risk-based
prioritisation for each release.
• Manual Testing: Design and execute thorough manual test cases for functional,
regression, exploratory, edge-case, and UX validation across web and API surfaces.
Document findings with clarity and reproducibility.
• Automation Engineering: Build, maintain, and extend automated test suites for functional
regression, API contracts, and end-to-end user journeys. Integrate tests into CI/CD
pipelines so quality gates are automated, not manual.
• AI Feature Testing: Develop testing strategies for AI-powered features — including output
consistency checks, prompt regression testing, hallucination detection, and edge case
cataloguing. Work with ML engineers to define acceptable quality thresholds for model
outputs.
• Bug Lifecycle Ownership: File clear, well-evidenced bug reports with steps to reproduce,
expected vs. actual behaviour, severity classification, and supporting artefacts. Track
bugs through to closure and verify fixes rigorously.
• Performance & Load Testing: Design and run performance tests to validate system
behaviour under realistic and peak enterprise load conditions. Surface bottlenecks before
they become customer incidents.
• Release Readiness: Own the release sign-off process — coordinate with engineering and
product to define go/no-go criteria, maintain a release checklist, and communicate quality
status clearly to stakeholders.
• Process Improvement: Continuously evaluate and improve QA processes, tooling, and
coverage. Identify patterns in defects and drive root cause elimination upstream.
What We're Looking For
• 3–6 years of QA experience across manual and automation testing, ideally in a B2B SaaS
or enterprise software environment.
• Proven track record of owning test strategy and execution for complex, multi-component
products in an agile delivery environment.
• Experience testing both frontend web applications and backend APIs — comfortable
context-switching between UI flows and API contracts within the same sprint.
Manual Testing Skills
• Strong command of test case design techniques: equivalence partitioning, boundary value
analysis, decision tables, state transition testing, and exploratory testing heuristics.
• Experience writing detailed, reproducible test cases and maintaining them across evolving
product versions.
• Sharp eye for UX issues, accessibility gaps, and edge cases that automated tests are
unlikely to catch.
• Familiarity with API testing using tools such as Postman, Insomnia, or similar — ability to
validate request/response contracts, authentication flows, and error handling
independently.
Automation Skills
• Hands-on experience building and maintaining automated test suites using frameworks
such as Selenium, Playwright, Cypress, or similar for UI automation.
• Proficiency with at least one scripting or programming language — Python or JavaScript
preferred — to write and debug test scripts without engineering support.
• Experience integrating automated tests into CI/CD pipelines (GitHub Actions, Jenkins,
GitLab CI, or equivalent).
• Familiarity with API automation using REST Assured, Pytest, or similar; experience
validating data-heavy workflows end-to-end.
• Understanding of test data management, environment isolation, and flaky test diagnosis
and remediation.
AI Product Testing
• Exposure to or curiosity about testing non-deterministic AI systems — understanding that
the same input can produce varying outputs and that quality thresholds for AI features
must be defined probabilistically.
• Familiarity with LLM output evaluation concepts (correctness, groundedness, relevance,
tone) is a strong plus.
Nice to Have
• Experience with performance and load testing tools such as k6, JMeter, or Locust.
• Exposure to security testing basics — OWASP Top 10 awareness, input validation
checks, and authentication testing.
• Familiarity with observability and monitoring tools (Datadog, Sentry, or similar) to connect
QA findings with production signals.
• ISTQB Foundation certification or equivalent formal QA training.
You Will Thrive Here If
• You think of quality as a product responsibility, not a gating function. You are proactive,
not reactive.
• You are comfortable operating with autonomy — you do not need to be assigned every
test case; you figure out what needs to be tested and do it.
• You can hold your ground in a room of engineers. You file clear, well-evidenced bugs and
follow through until they are resolved.
• You get genuinely curious when something behaves unexpectedly — not frustrated. You
dig until you understand why.
• You care about the end user. Every bug you catch is a bad experience prevented for an
enterprise customer.
Why Join Us
• Meaningful ownership of quality in a fast-scaling AI product used by enterprise sales
teams globally.
• Work alongside engineers from Amazon, IIT, and Stanford in a team that takes quality
seriously as a cultural value.
• Exposure to cutting-edge AI system testing challenges that are genuinely new — no
established playbook, just clear thinking and good engineering.
• Competitive compensation, growth into senior QA or SDET roles, and the satisfaction of
shipping software that actually works.
*OrbitShift is an equal-opportunity employer. Candidates will not be discriminated against based on race, ethnicity, colour, religion, caste, sex, gender identity, sexual orientation, national origin, veteran status, or disability status.