AITF Community Edition Update
Jason Arbon
CEO, Checkie.AI | test.ai, Google, Microsoft, Chrome, Search, “Automating the World”
The AITF (AI Test Framework) Community Edition has been in the hands of a few brave souls for the past two weeks. Our enterprise customers have the test.ai team to get things set up and running seamlessly, but we are deliberately letting the Community Edition folks try it without our help, to make sure it is ready for the masses. It is a challenge to get such a complicated software and AI stack to work ‘self serve’, but we are well on our way to doing it, and luckily we have a smart and generous testing community to help get this off the ground.
Note, this isn’t marketing; this is just a team of super-passionate testers rolling out their framework, and I intend to do it as openly as possible, sharing the pain as well as the victories. No software is perfect, and getting an AI-based test framework and infrastructure to work ‘out of the box’ for someone with no help other than a setup document is a lofty goal. We are releasing this ‘quality gated’, meaning we aren’t aiming for a particular date, but rather rolling it out as fast as the feedback supports. Overall the plan is to start with one person, then grow the community exponentially.
First User Pain
We made a ‘mistake’ with our very first user. He was a buddy, an experienced tester, and well known in the community, but hadn’t registered in the signup form as a ‘canary’ user, or someone who can put up with much pain. We shared the binary file and a 10-page setup document. Here is some of his raw feedback, and probably ‘fodder’ for the philosophers of the testing world.
Installation guide was out of order. I had to install Xcode first before anything else was working.
Obviously a fail in our thinking about the end user — at test.ai, we all install Xcode routinely.
“Install Android Studio and create a Pixel emulator named ‘Nexus5’ with API 28. Be sure to select API 28 from the x86 tab.” WTF is API 28???
Yeah… we are looking to do this more automagically. The doc also covered setup for all platforms, so it looked like someone needed to install everything just to get started (not true!).
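For context, ‘API 28’ is simply the Android SDK’s name for Android 9 (Pie). Until that step is fully automated, the emulator setup boils down to something like the sketch below. It assumes the Android SDK command-line tools (sdkmanager/avdmanager) are installed and on PATH, and it is not AITF code, just an illustration of what ‘automagically’ could mean here.

```python
# Hypothetical helper, not part of AITF: create the "Nexus5" AVD the setup
# doc asks for, so the user never has to learn what "API 28" means.
# Assumes the Android SDK command-line tools are installed and on PATH.
import subprocess

API_LEVEL = 28  # Android 9 (Pie)
SYSTEM_IMAGE = f"system-images;android-{API_LEVEL};default;x86"

def create_pixel_emulator(name: str = "Nexus5") -> None:
    # Download the x86 system image for API 28.
    subprocess.run(["sdkmanager", SYSTEM_IMAGE], check=True)
    # Create the AVD; answer "no" to the custom hardware profile prompt.
    subprocess.run(
        ["avdmanager", "create", "avd",
         "--name", name,
         "--package", SYSTEM_IMAGE,
         "--device", "pixel"],
        input="no\n", text=True, check=True,
    )

if __name__ == "__main__":
    create_pixel_emulator()
```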
TestProject nailed this by installing an Agent and it doing everything for you! So easy and fast and no errors.
Great idea, and go TestProject!
I’m probably not the best fit for this right now. It’s taking a lot of my time and I’m not getting paid for it.
Ouch, but yeah, we were looking for folks to dogfood this and let us know about these types of issues — for free! Given the install time and the person’s time constraints, it took almost 4 days to get this far in the feedback. This ‘Tester0’ also took the approach, as the documentation suggested, of installing *all* possible dependencies for all possible platforms. I'm surprised Tester0 put up with as much as they did before crying uncle! Good news: these are all super fixable issues, and unrelated to the core product. They are also issues we don’t see on our end, as most everything is ‘containerized’ and automated, but we are now getting it to run on random machines in the wild, and not everyone is familiar with the dependencies.
Second User Miracle
We next moved to the second user, aka Tester1, someone who is also experienced in testing and has been a fan of AI-based testing — and is signed up as a ‘canary’ user who can deal with some pain :) After we fixed many of the issues noted by Tester0, Tester1 focused only on installing and testing for the web platform. He created several test cases in a single day, and even ran them on different ‘flights/lanes’, start to finish with no assistance. Yeah! Some comments:
The best part is that if the test hit something that causes them to fail they mark the step as a fail, and move on with the test
Exactly! AI, just like a human, can often note the change/break but keep executing the test case to verify the rest of the functionality.
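For readers who want to picture the behavior, this is the classic ‘soft fail’ pattern: record each step’s result and keep the run going. The sketch below is a generic Python illustration of that pattern, not AITF internals, and every name in it is made up.

```python
# Generic "mark the step as failed and keep going" pattern (not AITF code).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class StepResult:
    name: str
    passed: bool
    error: str = ""

@dataclass
class TestRun:
    results: List[StepResult] = field(default_factory=list)

    def run_step(self, name: str, action: Callable[[], None]) -> None:
        try:
            action()
            self.results.append(StepResult(name, True))
        except Exception as exc:  # record the failure, don't abort the run
            self.results.append(StepResult(name, False, str(exc)))

def tap_missing_banner() -> None:
    raise AssertionError("banner not found")

run = TestRun()
run.run_step("open login page", lambda: None)
run.run_step("tap promo banner", tap_missing_banner)
run.run_step("submit login form", lambda: None)
print([(r.name, r.passed) for r in run.results])
# [('open login page', True), ('tap promo banner', False), ('submit login form', True)]
```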
All that aside, the best part is I did that with only a ten-page install document to go off of; the product is that intuitive.
We hope to get the install down to a page or two. Even then, if folks are comfortable with containers, it might only be a few lines, and in the future we’ll also open up the pure web-hosted version.
And I can’t code my way out of… well anything I simply can’t code
We’ll also open up the ‘object model’, but this is a key advantage of codeless solutions — not only do you not have to know how to code, but you can’t code yourself into a brittle corner.
If I were talking to another company about this product, and how useful it is, the time from install to running is the most amazing
Yeehaw! Other frameworks are a huge pain to set up; this is a key thing we are working to solve.
Sure something like an IDE can be fast like that, and Ghost Inspector, however with each one of those things there are parts of the product they simply can’t; test.ai seems to have no such limitations
Exactly, if you can see it in the UI, you can test it with AITF. Forget the DOM (Document Object Model), forget custom attribute annotations, forget writing and ‘fixing up’ selectors all day.
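For contrast, here is what a typical selector-driven step looks like in a DOM-based tool such as Selenium (a generic sketch with a placeholder URL and selector, nothing to do with AITF). Any harmless markup refactor that renames an id or reorders the divs breaks this locator, even though the button looks exactly the same on screen, and that is the brittleness being avoided above.

```python
# A typical selector-based step in Selenium (generic example, not AITF).
# The locator below is tied to the page's internal DOM structure, so a
# harmless markup refactor breaks the test even if the UI looks identical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

login_button = driver.find_element(
    By.CSS_SELECTOR, "#root > div.page > form > div:nth-child(3) > button.primary"
)
login_button.click()
driver.quit()
```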
So, great news: ‘exponentially’ better feedback and results versus Tester0! These were the first tests developed without the test.ai team overseeing everything. Great progress toward a broader release of the Community Edition.
The Next Five
We’ve just sent out invites to the next five ‘canary’ folks. If you haven’t been invited yet, just be a little patient — this is what happens when testers control releases.
Survey Metrics
We sent out a signup form (it's not too late to sign up!) and asked some basic questions to help prioritize the feature work and make sure we engage folks who are ready and interested. Remember, this is a survey biased toward people who want to engage with AI-based tooling for testing, are part of my indirect network, and bothered to fill out the form. In the interest of openness, and hoping it might be useful to the broader AI-based testing community, here is an anonymized summary of that data:
Hope this is interesting feedback and progress on the Community Edition, and perhaps offers some insight into the context of the people interested in using it.
Stay tuned for feedback from the ‘Five’ and to be invited yourself. Special thanks to Carlos Kidman (aka Tester0) and Matt DeYoung (aka Tester1) for their expertise, patience, time, and feedback.
— Jason Arbon, founder/CTO, test.ai