I got tired of maintaining separate, brittle E2E suites for different platforms (Appium, Espresso, XCUITest). I’m building Talos, a QA agent that is completely agnostic to your tech stack.
The Core Insight:
Because Talos uses AI computer vision to "see" the interface, it doesn't care whether your app is written in Flutter, React Native, Kotlin, or Swift, or runs on the web. If it has pixels, Talos can test it.
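To make "if it has pixels" concrete, here's a minimal sketch of what one vision-driven step can look like. This is not Talos's actual code: it assumes an OpenAI-style multimodal API, uses adb as the driver, and the model name, prompt, and helper names (screenshot, locate) are all illustrative.

```python
import base64
import json
import subprocess

from openai import OpenAI  # assumption: any multimodal API that accepts images would do

client = OpenAI()

def screenshot() -> bytes:
    # Grab raw pixels from an Android device via adb; any driver that
    # yields a PNG (simulator, browser) could stand in here.
    return subprocess.run(
        ["adb", "exec-out", "screencap", "-p"], capture_output=True, check=True
    ).stdout

def locate(instruction: str, png: bytes) -> tuple[int, int]:
    # Ask the model for tap coordinates from pixels alone:
    # no IDs, no selectors, no view hierarchy.
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f'Return JSON {{"x": int, "y": int}} for the center of: {instruction}'},
                {"type": "image_url",
                 "image_url": {"url": "data:image/png;base64,"
                                      + base64.b64encode(png).decode()}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    point = json.loads(resp.choices[0].message.content)
    return point["x"], point["y"]

# One pixel-level step: find the login button and tap it.
x, y = locate("the 'Log in' button", screenshot())
subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)
```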
How it works:
Generation: It scans your source code and Figma designs to auto-generate natural-language test cases (e.g., "User logs in"); first sketch below.
Execution: It drives the simulator/browser, capturing screenshots and logs in real time (the tap-by-pixels sketch above).
Verification: The AI compares the visual capture against your Figma/reference images to pass or fail the test, completely ignoring the underlying view hierarchy; second sketch below.
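A hedged sketch of the generation step, assuming a repo small enough to inline and Kotlin screen files purely as an example; the file filter, model name, and prompt are placeholders, not Talos's real pipeline:

```python
from pathlib import Path

from openai import OpenAI

client = OpenAI()

def generate_cases(repo: Path) -> list[str]:
    # Inline the screen-level source files and ask for plain-English
    # test cases, one per line.
    sources = "\n\n".join(
        p.read_text() for p in repo.rglob("*.kt") if "screen" in p.name.lower()
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{
            "role": "user",
            "content": "List the end-to-end test cases, in plain English and "
                       "one per line, for the user flows implemented below:\n\n"
                       + sources,
        }],
    )
    return resp.choices[0].message.content.splitlines()

# e.g. generate_cases(Path("app/src/main")) -> ["User logs in", "User resets password", ...]
```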
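And the verification step in the same spirit: export the Figma frame as a reference PNG, then ask the model for a pixel-only verdict. Again a sketch; the prompt and the JSON verdict shape are assumptions:

```python
import base64
import json

from openai import OpenAI

client = OpenAI()

def as_data_url(png: bytes) -> str:
    return "data:image/png;base64," + base64.b64encode(png).decode()

def verify(capture: bytes, reference: bytes, expectation: str) -> dict:
    # Verdict from pixels alone: the model sees both images, never the widget tree.
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Compare the app screenshot (first image) to the design "
                         f"reference (second image). Expectation: {expectation}. "
                         'Return JSON {"pass": bool, "reason": str}. Ignore minor '
                         "anti-aliasing and font-rendering differences."},
                {"type": "image_url", "image_url": {"url": as_data_url(capture)}},
                {"type": "image_url", "image_url": {"url": as_data_url(reference)}},
            ],
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# verdict = verify(screenshot(), open("figma/login_done.png", "rb").read(),
#                  "the home screen is shown after a successful login")
```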
I’m looking for feedback from mobile devs on different frameworks to see how it handles specific native animations.
https://github.com/Talos-Tester-AI/Talos
Man, this would be incredible, especially if you could import your existing test suite for automatic conversion.