<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en" xmlns="http://www.w3.org/2005/Atom"><title>Recent changes to feature-requests</title><link href="https://sourceforge.net/p/pyml/feature-requests/" rel="alternate"/><link href="https://sourceforge.net/p/pyml/feature-requests/feed.atom" rel="self"/><id>https://sourceforge.net/p/pyml/feature-requests/</id><updated>2012-02-06T10:44:12Z</updated><subtitle>Recent changes to feature-requests</subtitle><entry><title>Multiclass ROC</title><link href="https://sourceforge.net/p/pyml/feature-requests/3/" rel="alternate"/><published>2012-02-06T10:44:12Z</published><updated>2012-02-06T10:44:12Z</updated><author><name>Anol Paisal</name><uri>https://sourceforge.net/u/apaisal/</uri></author><id>https://sourceforge.net73edf7aa25ee112ce46e61e5f0aa50b73bdc3fad</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I see the PyML.evaluators.roc module only supports one-vs-one (1 by 1) labels.&lt;br /&gt;
Please include a one-vs-rest ROC.&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Storage of trained kernel data</title><link href="https://sourceforge.net/p/pyml/feature-requests/2/" rel="alternate"/><published>2004-10-26T09:01:01Z</published><updated>2004-10-26T09:01:01Z</updated><author><name>Anonymous</name><uri>https://sourceforge.net/u/userid-None/</uri></author><id>https://sourceforge.net35b144cb8382602049fc89e66dad95e978485104</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;I use your library for very large datasets. The training&lt;br /&gt;
time is sometimes very long. Is it possible to enhance&lt;br /&gt;
the SVM class with a read/write method?&lt;/p&gt;
&lt;p&gt;--&lt;br /&gt;
Thilo Wehrmann&lt;br /&gt;
thilo.wehrmann@mail.uni-wuerzburg.de&lt;/p&gt;&lt;/div&gt;</summary></entry><entry><title>Matthews correlation coefficient (MCC) et al.</title><link href="https://sourceforge.net/p/pyml/feature-requests/1/" rel="alternate"/><published>2004-09-02T00:05:53Z</published><updated>2004-09-02T00:05:53Z</updated><author><name>ozi</name><uri>https://sourceforge.net/u/jstelzman/</uri></author><id>https://sourceforge.netccff9a1c398eb588e9617aa0d1bf492b9a22fff8</id><summary type="html">&lt;div class="markdown_content"&gt;&lt;p&gt;The Matthews correlation coefficient (MCC) is a valuable&lt;br /&gt;
tool for model selection.&lt;/p&gt;
&lt;p&gt;Especially when classes are very unevenly distributed, you&lt;br /&gt;
can still get acceptable values for ROC, ppv, balancedSuccess,&lt;br /&gt;
etc., even though the classifier assigns all examples to&lt;br /&gt;
the same class. MCC gives 0 in such cases. AFAIK the&lt;br /&gt;
extension to more than two classes is straightforward&lt;br /&gt;
(I think the Baldi/Brunak book has a formula).&lt;/p&gt;
&lt;p&gt;However, pure MCC also assigns 0 to perfect classifiers,&lt;br /&gt;
so one probably needs an MCC-derived measure.&lt;br /&gt;
The minimum and the standard deviation of the&lt;br /&gt;
performance measures obtained by stratifiedCV are also&lt;br /&gt;
important indicators and should be reported together with&lt;br /&gt;
the average.&lt;/p&gt;
&lt;p&gt;In summary, I would like to see a bit more flexibility&lt;br /&gt;
in how the best model is selected from, e.g., a ParamGrid.&lt;/p&gt;
&lt;p&gt;Thank you.&lt;/p&gt;&lt;/div&gt;</summary></entry></feed>