e-učilnica UP FAMNIT

ELBA_Dushanbe

Exercise #6: Comparing different evaluation strategies and measures

Opened: Monday, 30 May 2022, 8:30 AM

In this assignment the assistant will guide you in applying different evaluation methods to chosen datasets.
You will also learn how to use WEKA's Experimenter tool to compare the performance of different algorithms on different datasets.

You will be using WEKA's sample datasets, which you can find in the data folder of your WEKA installation.
Alternatively, you can download these sample datasets from the e-classroom
(subfolder "Datasets" in the "Practice #7: Evaluation" folder).
You will also need the Evaluation-SurnameName.txt file (download it from the same location in the e-classroom as the sample data); you will enter your results into this file and submit it when finished.

Entering the results in the Evaluation-SurnameName.txt file:
look for "___" (three consecutive underscore characters) and replace each with the actual result (number/answer).
Use the default algorithm parameters in WEKA, if not otherwise specified.

So, let's get to it!

Step #1 - overfitting:
Open the glass.arff file in WEKA, run the J48 classifier on the loaded data, try the different evaluation strategies, and fill in the answers.
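To see why the "Use training set" strategy is misleading, here is a small illustrative sketch outside WEKA: scikit-learn's DecisionTreeClassifier stands in for J48, and the bundled iris data stands in for glass.arff (which scikit-learn does not ship). Evaluating on the training data rewards memorization; cross-validation scores only on unseen folds.

```python
# Illustrative sketch (not WEKA itself): a decision tree scored on its own
# training data versus 10-fold cross-validation.
from sklearn.datasets import load_iris  # stand-in dataset for glass.arff
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# "Use training set" strategy: fit and score on the same data -> optimistic.
train_acc = DecisionTreeClassifier(random_state=0).fit(X, y).score(X, y)

# 10-fold cross-validation: each fold is scored on data the tree never saw.
cv_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()

print(f"training-set accuracy: {train_acc:.3f}")
print(f"10-fold CV accuracy:   {cv_acc:.3f}")
```

Expect the training-set figure to be (near-)perfect while cross-validation is noticeably lower; the gap is the overfitting you are asked to observe in WEKA.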

Step #2 - train/test split and randomization:
Open (our "old friend") the iris.arff file in WEKA, run the J48 classifier on the loaded data with "Test options" set to "Percentage split (66%)", run with and without randomization, and fill in the answers.
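The effect of randomization can be sketched outside WEKA as well (again with scikit-learn as a hedged stand-in): the iris file lists its rows sorted by class, so a 66% split without shuffling trains on the first two classes only and tests almost exclusively on the third.

```python
# Sketch of WEKA's "Percentage split (66%)" with and without randomization.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # rows are sorted by class, like iris.arff

accs = {}
for shuffle in (True, False):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.66, shuffle=shuffle, random_state=0)
    accs[shuffle] = (
        DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te))
    print(f"shuffle={shuffle}: accuracy={accs[shuffle]:.3f}")
```

Without shuffling, the test set is nearly all the one class the tree never trained on, so accuracy collapses; with shuffling, all classes appear on both sides of the split.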

Step #3 - use the Experimenter:
Open the Experimenter tool in WEKA and click on the "New" button.
Add the following files in the "Datasets" section of the window by clicking on the "Add new..." button for each file:
contact-lenses.arff
diabetes.arff
glass.arff
hypothyroid.arff
ionosphere.arff
iris.arff
labor.arff
unbalanced.arff
In the "Algorithms" section of the window, add the classifiers ZeroR, OneR and J48 (all with default parameters), following a procedure similar to the one for the datasets.
Run the experiment by going to the "Run" tab and clicking on the "Start" button. Wait until the experiment is completed
(there should be a "Not running" message in the "Status" section of the window and messages "Finished" and "There were 0 errors" in the "Log" section).
Proceed to the "Analyse" tab.
Click the "Experiment" button; the "Test output" section should fill with the available result sets.
In the "Configure test" section, tick the "Show std. deviations" checkbox and select J48 as the "Test base" classifier.
Click "Perform test", check the answers in the "Test output", and fill them into the Evaluation-SurnameName.txt file as requested.
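What the Experimenter automates is essentially "run every classifier on every dataset with cross-validation and tabulate mean ± std. deviation". A hedged scikit-learn sketch of that loop, with DummyClassifier playing ZeroR, a depth-1 tree approximating OneR, a full tree standing in for J48, and bundled iris/wine data replacing the .arff files (none of these are WEKA's own components):

```python
# Miniature "Experimenter": every classifier x every dataset, 10-fold CV,
# collected into one results table of (mean, std) accuracy.
from sklearn.datasets import load_iris, load_wine
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

datasets = {
    "iris": load_iris(return_X_y=True),
    "wine": load_wine(return_X_y=True),
}
classifiers = {
    "ZeroR-like": DummyClassifier(strategy="most_frequent"),  # majority class
    "OneR-like":  DecisionTreeClassifier(max_depth=1, random_state=0),  # stump
    "J48-like":   DecisionTreeClassifier(random_state=0),  # full tree
}

results = {}
for dname, (X, y) in datasets.items():
    for cname, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=10)
        results[(dname, cname)] = (scores.mean(), scores.std())
        print(f"{dname:5s} {cname:10s} {scores.mean():.3f} +/- {scores.std():.3f}")
```

The ZeroR-like baseline scores roughly the frequency of the majority class, which is exactly the reference point the Experimenter's "Test base" comparison uses to flag significantly better or worse results.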


Rename the final TXT file as "Evaluation-<SurnameName>.txt"
(example: Evaluation-KavsekBranko.txt) and submit it here!

  • contact-lenses.arff (31 May 2022, 8:24 PM)
  • diabetes.arff (31 May 2022, 8:24 PM)
  • Evaluation-SurnameName.txt (31 May 2022, 8:24 PM)
  • glass.arff (31 May 2022, 8:24 PM)
  • hypothyroid.arff (31 May 2022, 8:24 PM)
  • ionosphere.arff (31 May 2022, 8:24 PM)
  • iris.arff (31 May 2022, 8:24 PM)
  • labor.arff (31 May 2022, 8:24 PM)
  • unbalanced.arff (31 May 2022, 8:24 PM)
