ML-Flex uses machine-learning algorithms to derive models from independent variables, with the purpose of predicting the values of a dependent (class) variable.
•Configuring Algorithms
•Creating an Experiment File
•List of Experiment Settings
•Running an Experiment
•List of Command-line Arguments
•Executing Experiments Across Multiple Computers
•Modifying Java Source Code
•Creating a New Data Processor
•Integrating with Third-party Machine Learning Software
What are the benefits?
•Flexible processing of multiple data sets
•Execution of experiments across multiple computers
•Integration with third-party machine-learning software
•Configurable algorithms
•Template-based experiment files
Machine-learning algorithms have been developed in a wide variety of programming languages, and they expose many incompatible interfaces. ML-Flex makes it possible to work with any algorithm that provides a command-line interface.
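The general pattern behind such an integration can be sketched as follows. This is an illustrative example only, not ML-Flex's actual code: the `my_learner` tool and its `--train`/`--test` flags are hypothetical stand-ins for whatever command-line algorithm is being wrapped.

```java
import java.util.List;

public class CommandLineWrapper {
    // Build the argument list for a hypothetical external learner.
    // In a real integration, these flags would come from a template
    // configured for the specific third-party tool.
    static List<String> buildCommand(String trainFile, String testFile) {
        return List.of("my_learner", "--train", trainFile, "--test", testFile);
    }

    public static void main(String[] args) {
        List<String> cmd = buildCommand("train.tsv", "test.tsv");
        // A ProcessBuilder would launch the external tool and its output
        // would be parsed into predictions; here we only show the command.
        ProcessBuilder pb = new ProcessBuilder(cmd);
        System.out.println(String.join(" ", pb.command()));
    }
}
```

Because only a command line and parseable output are required, the wrapped algorithm can be written in any language.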
Machine-learning algorithms have long been applied to the Iris data set, introduced by Sir Ronald Fisher in 1936, which contains four independent variables (sepal length, sepal width, petal length, petal width) and one dependent variable (the species of Iris flower: setosa, versicolor, or virginica). Deriving prediction models from the four independent variables, machine-learning algorithms can often differentiate between the species with near-perfect accuracy.
One important aspect to consider in performing a machine-learning experiment is the validation strategy. With the wrong validation approach, biases can be introduced, making an algorithm appear to have more predictive ability than it actually does. Cross validation is a commonly used validation strategy that can help avoid such biases.
In cross validation, the data instances are partitioned into k groups ("folds"); in turn, each fold is held out as the "test" instances, the algorithm derives a model from the remaining "training" instances, and the model is applied to the test instances. The algorithm's performance is evaluated by how well the predictions for the test instances coincide with the actual values being predicted.
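The fold-assignment step described above can be sketched as follows. This is a minimal, generic illustration (not ML-Flex's implementation): 150 instances, as in the Iris data set, are assigned at random to 10 balanced folds, and fold 0 is treated as the test set.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class KFold {
    // Assign n instances to k folds of (near-)equal size, at random.
    static int[] assignFolds(int n, int k, long seed) {
        List<Integer> labels = new ArrayList<>();
        for (int i = 0; i < n; i++) labels.add(i % k); // balanced fold labels
        Collections.shuffle(labels, new Random(seed)); // random assignment
        int[] folds = new int[n];
        for (int i = 0; i < n; i++) folds[i] = labels.get(i);
        return folds;
    }

    public static void main(String[] args) {
        int[] folds = assignFolds(150, 10, 42); // e.g. the 150 Iris instances
        int testCount = 0;
        for (int f : folds) if (f == 0) testCount++; // hold out fold 0
        System.out.println("test instances in fold 0: " + testCount);
        System.out.println("training instances: " + (150 - testCount));
    }
}
```

Iterating this over all 10 folds gives every instance exactly one turn in the test set.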
A common value for k is 10. Another variation is to use "nested" cross validation within each training set: some form of cross validation is performed on the training data to optimize the model before it is applied to the "outer" test set. Going a step further, many studies also repeat cross validation multiple times on the same data set. This allows them to assess the robustness of their findings as data instances are assigned differently (at random) to folds.
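Repetition of cross validation can be sketched by re-running the fold assignment with different random seeds: each repetition reassigns instances to folds, while each assignment remains balanced. The helper below is a hypothetical illustration, not ML-Flex code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RepeatedCV {
    // Same balanced random fold assignment as before.
    static int[] assignFolds(int n, int k, long seed) {
        List<Integer> labels = new ArrayList<>();
        for (int i = 0; i < n; i++) labels.add(i % k);
        Collections.shuffle(labels, new Random(seed));
        int[] folds = new int[n];
        for (int i = 0; i < n; i++) folds[i] = labels.get(i);
        return folds;
    }

    public static void main(String[] args) {
        // Two repetitions of 10-fold cross validation on 150 instances.
        int[] first = assignFolds(150, 10, 1);
        int[] second = assignFolds(150, 10, 2);

        // Every repetition keeps the folds balanced (15 instances per fold).
        int[] counts = new int[10];
        for (int f : second) counts[f]++;
        boolean balanced = Arrays.stream(counts).allMatch(c -> c == 15);
        System.out.println("fold sizes balanced: " + balanced);

        // Different seeds generally produce different assignments, which is
        // what makes averaging performance across repetitions informative.
        System.out.println("same assignment across seeds: "
                + Arrays.equals(first, second));
    }
}
```

Summarizing accuracy across such repetitions gives a more stable performance estimate than any single random split.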
PAT RESEARCH is a B2B discovery platform which provides Best Practices, Buying Guides, Reviews, Ratings, Comparison, Research, Commentary, and Analysis for Enterprise Software and Services. We provide Best Practices, PAT Index™ enabled product reviews, and user review comparisons to help IT decision makers such as CEOs, CIOs, Directors, and Executives identify technologies, software, services, and strategies.