Teachable Machine is a web-based tool that makes it possible to quickly create machine learning models, and teachers and students have used it to explain different topics in a more interactive way. In addition, it is compatible with Google Drive.
The website is aimed at anyone with an idea they want to explore, such as artists, students, innovators, or creators of all kinds, and no prior knowledge of machine learning is required. With it, you can create custom commands that use artificial intelligence to recognize any sound, movement, or person.
On the website, you collect and group the examples you want the computer to learn into classes or categories, then train the model and test it on the fly to see whether it correctly classifies new examples. These projects can be exported for use in sites, applications, and more, and the model can be downloaded or hosted online.

Teachable Machine
Teachable Machine works with uploaded files or live-captured examples. In addition, according to the company, “it is a tool that respects the way of working, and can be used on the device, without any data from the webcam or microphone leaving the website.”
The models created with the platform are TensorFlow.js models that run anywhere JavaScript runs, so they are compatible with tools such as Glitch, P5.js, Node.js, and many others.
Also, you can export the models to different formats for use on other platforms.
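As a rough illustration of what using an exported model looks like, the sketch below loads an image model in the browser with the @teachablemachine/image helper library and classifies an image on the page. The model URL, metadata URL, and element ID are placeholders, so the code that Teachable Machine actually generates for your own project should be preferred.

```javascript
// Minimal sketch, assuming the @tensorflow/tfjs and @teachablemachine/image
// scripts are already loaded on the page. The URLs below are placeholders
// for the files of your own exported model.
const MODEL_URL = "https://example.com/my-model/model.json";
const METADATA_URL = "https://example.com/my-model/metadata.json";

async function run() {
  // Load the model together with its class labels.
  const model = await tmImage.load(MODEL_URL, METADATA_URL);

  // Classify an <img> element already present on the page (hypothetical id).
  const image = document.getElementById("test-image");
  const predictions = await model.predict(image);

  // Each prediction pairs a class name with a probability.
  for (const p of predictions) {
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`);
  }
}

run();
```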

Detecting poses
Pose detection means that from an image of a person, you can estimate the pose, such as where the arms, legs, head, and joints are. To start training the machine, you must first create different categories, or classes, to teach it.
There you can create classes such as “No Tilt”, “Left Tilt”, and “Right Tilt”. Then, sample poses can be added for each of these classes.
Also, if you need to record samples without holding the button down, you can go to the settings panel and turn off Hold to Record, and it will count down and then record a set of samples automatically.
Then you can click Train, and once the model has finished training you can see how it classifies the poses that were made.
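To give an idea of how a trained pose model could then be used outside the site, the sketch below, assuming the @teachablemachine/pose helper library and placeholder model URLs, reads frames from the webcam, estimates the skeleton, and classifies it into classes like the tilt examples above.

```javascript
// Minimal sketch, assuming the @tensorflow/tfjs and @teachablemachine/pose
// scripts are loaded; the URLs are placeholders for your exported model.
const MODEL_URL = "https://example.com/my-pose-model/model.json";
const METADATA_URL = "https://example.com/my-pose-model/metadata.json";

async function run() {
  const model = await tmPose.load(MODEL_URL, METADATA_URL);

  // A 200x200 webcam feed, flipped horizontally like a mirror.
  const webcam = new tmPose.Webcam(200, 200, true);
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  async function loop() {
    webcam.update();
    // First estimate the pose, then classify it (e.g. "No Tilt", "Left Tilt", "Right Tilt").
    const { posenetOutput } = await model.estimatePose(webcam.canvas);
    const predictions = await model.predict(posenetOutput);
    const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
    console.log(`${best.className}: ${(best.probability * 100).toFixed(1)}%`);
    window.requestAnimationFrame(loop);
  }
  loop();
}

run();
```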

Audio project
It is also possible to build a machine learning model that detects clicks, claps, and hisses using audio clips. To use it, you must first create different categories, or classes.
The platform starts with two class boxes, “Background Noise” and “Class 2”; these are the initial classes that Teachable Machine provides for any audio project. This makes it possible, for example, to create sound commands for people who cannot speak, so that what they are asking for can be identified at the moment.
To use it, you’ll need a background noise class to detect when no noise is being produced. And because the background noise in a forest is different than in an office (or anywhere else), you should give that class audio samples for anywhere you plan to use your model.
This particular machine learning technology learns from samples that are one second long, and each class needs at least eight one-second sound samples to train correctly. The more data the classes are given to learn from, the better the model will classify.
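As an illustration of how an exported audio model might be wired up, the sketch below assumes the TensorFlow.js speech-commands library, which Teachable Machine's audio export is built on, and placeholder model URLs; it listens to the microphone and logs the most likely class whenever a prediction is confident enough.

```javascript
// Minimal sketch, assuming the @tensorflow/tfjs and
// @tensorflow-models/speech-commands scripts are loaded; the URLs are
// placeholders for your own exported audio model.
const MODEL_URL = "https://example.com/my-audio-model/model.json";
const METADATA_URL = "https://example.com/my-audio-model/metadata.json";

async function run() {
  // BROWSER_FFT uses the browser's own Fourier transform on the microphone stream.
  const recognizer = speechCommands.create("BROWSER_FFT", undefined, MODEL_URL, METADATA_URL);
  await recognizer.ensureModelLoaded();

  const labels = recognizer.wordLabels(); // e.g. ["Background Noise", "Class 2"]

  recognizer.listen(
    (result) => {
      // result.scores holds one probability per class, in the same order as labels.
      const scores = Array.from(result.scores);
      const best = scores.indexOf(Math.max(...scores));
      console.log(`${labels[best]}: ${(scores[best] * 100).toFixed(1)}%`);
    },
    {
      probabilityThreshold: 0.75, // only report fairly confident predictions
      overlapFactor: 0.5,         // overlap between consecutive one-second windows
    }
  );
}

run();
```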