ML Trainer: Make Training Data vs. Vision Detector: usage and statistics
Creating original training data for Image Classification machine learning models just got a little easier!
ML Trainer lets developers quickly capture and export thousands of images to the Photos app, so every image can be imported via iCloud or the built-in Image Capture app on Mac.
Press the Scan button to capture a preset number of images as you move closer to or pivot around your subject, or tap the Camera button to capture a single picture. Toggle the Flashlight to improve results in low-light conditions, and tap the Save button to export any captured images to the Photos app.
While images are always captured at 3 frames per second, you can adjust the Frame Count of each Scan in the Settings menu. A larger Frame Count will save you time, while a lower Frame Count will help improve accuracy across different angles. Enabling Crosshairs and Guides in the Settings menu can also help improve accuracy.
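Because capture runs at a fixed 3 frames per second, the Frame Count setting directly determines how long each Scan takes. A minimal sketch of that arithmetic (the function name is my own, not part of the app):

```python
FPS = 3  # ML Trainer's fixed capture rate, per the description above

def scan_duration_seconds(frame_count: int) -> float:
    """How long one Scan takes at the fixed 3 FPS capture rate."""
    return frame_count / FPS

# A 30-frame Scan gives you 10 seconds to pivot around the subject.
print(scan_duration_seconds(30))
```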
This app was designed specifically to speed up importing data into CreateML, the machine learning developer tool bundled with Xcode. Exported images should also be compatible with other platforms such as TensorFlow and Azure Machine Learning. A wired connection to a macOS device with the Image Capture app open is the fastest way to import your data to the desktop.
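CreateML's image classifier expects training images grouped into one folder per label. A desktop-side sketch that sorts exported images into that layout, assuming a hypothetical `<label>_<index>.jpg` filename convention (ML Trainer does not prescribe one):

```python
import shutil
from pathlib import Path

def organize_for_createml(src: Path, dst: Path) -> None:
    """Copy exported images into CreateML's per-label folder layout.

    Assumes a hypothetical filename convention of `<label>_<index>.jpg`;
    everything before the last underscore is treated as the class label.
    """
    for img in src.glob("*.jpg"):
        label = img.stem.rsplit("_", 1)[0]
        target = dst / label
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy(img, target / img.name)
```

The resulting `dst/<label>/…` tree can be dragged straight into CreateML as a training data source.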
- Apple App Store
- Free
- Developer Tools
Vision Detector performs image processing using a CoreML model on iPhones and iPads. Typically, CoreML models must be previewed in Xcode, or an app must be built with Xcode to run on an iPhone. However, Vision Detector allows you to easily run CoreML models on your iPhone.
To use the app, first prepare a machine learning model in CoreML format using CreateML or coremltools. Then copy the model into the iPhone/iPad file system, which is accessible through the iPhone's 'Files' app. This includes local storage and various cloud services (iCloud Drive, OneDrive, Google Drive, Dropbox, etc.). You can also use AirDrop to place the CoreML model in the 'Files' app. After launching the app, select and load your machine learning model.
You can choose the input source image from:
- Video captured by the iPhone/iPad's built-in camera
- Still images from the built-in camera
- The photo library
- The file system
For video inputs, continuous inference is performed on the camera feed. However, the frame rate and other parameters depend on the device.
The supported types of machine learning models include:
- Image classification
- Object detection
- Style transfer
Models lacking a non-maximum suppression layer, or those that use MultiArray for input/output data, are not supported.
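Non-maximum suppression (NMS) is the post-processing step that collapses overlapping detections into a single box; object detection models used with this app must carry it as a built-in layer rather than rely on the host to run it. A minimal greedy sketch of what such a layer computes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard remaining boxes that overlap it beyond the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

CoreML exporters (CreateML, and coremltools pipelines for YOLO-style models) can bake an equivalent NMS stage into the model itself, which is what Vision Detector expects.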
In the local 'Vision Detector' documents folder, you'll find an empty tab-separated values (TSV) file named 'customMessage.tsv'. This file is for defining custom messages to be displayed. The data should be organized into a table with two columns as follows:
(Label output by YOLO, etc.) (tab) (Message) (return)
(Label output by YOLO, etc.) (tab) (Message) (return)
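The two-column format above is easy to generate or validate on the desktop before copying the file over. A sketch of a parser for it (the function name is my own, not part of the app):

```python
import csv
from io import StringIO

def load_custom_messages(tsv_text: str) -> dict:
    """Parse the two-column customMessage.tsv: label <tab> message."""
    messages = {}
    for row in csv.reader(StringIO(tsv_text), delimiter="\t"):
        if len(row) >= 2:
            messages[row[0]] = row[1]
    return messages

sample = "dog\tA dog was detected\ncat\tA cat was detected\n"
print(load_custom_messages(sample))
```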
Note: This application does not include a machine learning model.
On the iPhone, you can use the LED torch feature. When the screen is in landscape orientation, touching the screen will hide the UI and switch to full-screen mode.
- Apple App Store
- Free
- Developer Tools
December 14, 2024