Exercise #6: Comparing different evaluation strategies and measures

Completion requirements
Opened: Monday, 30 May 2022, 08:30

In this assignment, the assistant will guide you through applying different evaluation methods to selected datasets.
You will also learn how to use WEKA's Experimenter tool to compare the performance of different algorithms on different datasets.

You will be using WEKA's sample datasets, which you can find in the data folder of your WEKA installation.
Alternatively, you can download these sample datasets from the e-classroom
(subfolder "Datasets" in the "Practice #7: Evaluation" folder).
You will also need the Evaluation-SurnameName.txt file (download it from the same location in the e-classroom as the sample data), which you will use to enter your results and submit when finished.

Entering the results in the Evaluation-SurnameName.txt file:
look for the "___" (three consecutive underscore characters) and replace it with the actual result (number/answer).
Use WEKA's default algorithm parameters unless otherwise specified.

So, let's get to it!

Step #1 - overfitting:
Open the glass.arff file in WEKA, run the J48 classifier on the loaded data, try the different evaluation strategies, and fill in the answers.
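
If you want to cross-check the Explorer results programmatically, here is a minimal sketch using WEKA's Java API. It assumes weka.jar is on the classpath and that glass.arff sits in a local data/ folder (the path is an assumption); it contrasts the optimistic "Use training set" estimate with 10-fold cross-validation, which is the heart of the overfitting question.

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

import java.util.Random;

public class OverfittingDemo {
    public static void main(String[] args) throws Exception {
        // Load glass.arff (adjust the path to your WEKA installation).
        Instances data = DataSource.read("data/glass.arff");
        data.setClassIndex(data.numAttributes() - 1);

        J48 tree = new J48();              // default parameters
        tree.buildClassifier(data);

        // "Use training set": evaluate on the same data the tree was built on.
        Evaluation trainEval = new Evaluation(data);
        trainEval.evaluateModel(tree, data);
        System.out.printf("Training-set accuracy: %.2f%%%n", trainEval.pctCorrect());

        // 10-fold cross-validation: a much less optimistic estimate.
        Evaluation cvEval = new Evaluation(data);
        cvEval.crossValidateModel(new J48(), data, 10, new Random(1));
        System.out.printf("10-fold CV accuracy:   %.2f%%%n", cvEval.pctCorrect());
    }
}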

Step #2 - train/test split and randomization:
Open (our "old friend") the iris.arff file in WEKA, run the J48 classifier on the loaded data with "Test options" set to "Percentage split (66%)", run with and without randomization, and fill in the answers.
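
A similar sketch for the percentage split, again assuming a local data/ folder (an assumption): it splits iris.arff 66%/34% once in the original order and once after shuffling with seed 1, which mirrors toggling randomization in the Explorer.

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

import java.util.Random;

public class SplitDemo {
    // Evaluate J48 on a 66% train / 34% test split of the given data.
    static double splitAccuracy(Instances data, boolean randomize) throws Exception {
        Instances copy = new Instances(data);
        if (randomize) {
            copy.randomize(new Random(1));   // shuffle before splitting
        }
        int trainSize = (int) Math.round(copy.numInstances() * 0.66);
        Instances train = new Instances(copy, 0, trainSize);
        Instances test = new Instances(copy, trainSize, copy.numInstances() - trainSize);

        J48 tree = new J48();
        tree.buildClassifier(train);

        Evaluation eval = new Evaluation(train);
        eval.evaluateModel(tree, test);
        return eval.pctCorrect();
    }

    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("data/iris.arff");  // path is an assumption
        data.setClassIndex(data.numAttributes() - 1);

        // iris.arff is sorted by class, so without shuffling the test portion
        // contains mostly one class and the accuracy estimate is badly skewed.
        System.out.printf("Without randomization: %.2f%%%n", splitAccuracy(data, false));
        System.out.printf("With randomization:    %.2f%%%n", splitAccuracy(data, true));
    }
}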

Step #3 - use the Experimenter:
Open the Experimenter tool in WEKA and click on the "New" button.
Add the following files in the "Datasets" section of the window by clicking on the "Add new..." button for each file:
contact-lenses.arff
diabetes.arff
glass.arff
hypothyroid.arff
ionosphere.arff
iris.arff
labor.arff
unbalanced.arff
In the "Algorithms" section of the window, add the classifiers ZeroR, OneR and J48 (all with default parameters) following a similar procedure like for the datasets.
Run the experiment by going to the "Run" tab and clicking on the "Start" button. Wait until the experiment is completed
(there should be a "Not running" message in the "Status" section of the window and messages "Finished" and "There were 0 errors" in the "Log" section).
Proceed to the "Analyse" tab.
Click the "Experiment" button. The "Test output" section should fill with the "Available resultsets".
In the "Configure test", set the "Show std. deviations" checkmark and select J48 as the "Test base" classifier.
Click on "Perform test", check the answers in the "Test output" and fill them in the Evaluation-SurnameName.txt  file as requested.


Rename the final TXT file as "Evaluation-<SurnameName>.txt"
(example: Evaluation-KavsekBranko.txt) and submit it here!

  • contact-lenses.arff
  • diabetes.arff
  • Evaluation-SurnameName.txt
  • glass.arff
  • hypothyroid.arff
  • ionosphere.arff
  • iris.arff
  • labor.arff
  • unbalanced.arff