
Upgrade all installed packages at once (PowerShell):

`pip freeze | %{$_.split('==')[0]} | %{pip install --upgrade $_}`

From <https://www.activestate.com/resources/quick-reads/how-to-update-all-python-packages/>
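For reference, a rough cross-platform Python equivalent of the one-liner above (a sketch only; it assumes plain `name==version` lines from `pip freeze`, so editable or URL-based installs would need filtering first):

```python
import subprocess
import sys

# List installed packages as "name==version" lines.
frozen = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True, check=True,
).stdout

for line in frozen.splitlines():
    name = line.split("==")[0]  # keep only the package name
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", name])
```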

Poetry cheat sheet (dependency management and packaging)
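A few commands worth having on the cheat sheet (standard Poetry CLI; check `poetry --help` for your version):

- `poetry new myproject` creates a project skeleton
- `poetry add requests` adds a dependency and installs it
- `poetry install` installs everything from the lock file
- `poetry run python script.py` runs a command inside the project environment
- `poetry build` / `poetry publish` build and publish the package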

Twisted

Event-driven networking engine
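A minimal sketch of what Twisted looks like in practice, the classic TCP echo server (port 8000 is arbitrary):

```python
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        # Write every received chunk straight back to the client.
        self.transport.write(data)

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()  # starts the event loop; blocks until reactor.stop()
```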

Scientific computing stack: `pip install numpy scipy jupyterlab matplotlib pandas scikit-learn`
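A toy example touching most of that stack, assuming the packages above are installed: generate noisy linear data with numpy, hold it in a pandas DataFrame, fit it with scikit-learn, and plot with matplotlib:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": np.linspace(0, 10, 50)})
df["y"] = 2.5 * df["x"] + rng.normal(scale=2.0, size=len(df))

model = LinearRegression().fit(df[["x"]], df["y"])
plt.scatter(df["x"], df["y"], label="data")
plt.plot(df["x"], model.predict(df[["x"]]), color="red", label="fit")
plt.legend()
plt.show()
```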

Nuitka

Python-to-C compiler (produces native executables)
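Typical invocation (per the Nuitka docs; flags vary a little between versions): `python -m nuitka --onefile program.py` compiles a script plus its dependencies into a single native executable.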

Plotly

Publication-quality interactive graphs
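A minimal Plotly sketch using its bundled iris sample data (`plotly.express` is the high-level API):

```python
import plotly.express as px

# Interactive scatter plot; fig.show() renders it in the browser or notebook.
fig = px.scatter(px.data.iris(), x="sepal_width", y="sepal_length", color="species")
fig.show()
```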

NLTK

Natural Language Toolkit (NLP)
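Minimal NLTK usage, tokenizing and part-of-speech tagging a sentence (resource names vary slightly across NLTK versions; newer releases may ask for "punkt_tab" and "averaged_perceptron_tagger_eng" instead):

```python
import nltk

# One-time model downloads.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

tokens = nltk.word_tokenize("NLTK makes basic NLP pipelines easy.")
print(nltk.pos_tag(tokens))  # list of (token, POS-tag) pairs
```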

If you are not getting good results, you should first check that you are using the right classification algorithm (is your data well suited to being classified by a linear SVM?) and that you have enough training data. Practically, that means you might consider visualizing your dataset through PCA or t-SNE to see how "clustered" your classes are, and checking how your classification metrics evolve with the amount of data your classifier is given.
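A quick way to do that visual check with scikit-learn and matplotlib, here on the iris dataset as a stand-in for your own data:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Project the features to 2D and colour points by class label:
# well-separated clusters suggest a linear classifier has a chance.
X, y = load_iris(return_X_y=True)
X2 = PCA(n_components=2).fit_transform(X)
plt.scatter(X2[:, 0], X2[:, 1], c=y)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```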

If you then confirm that investing in tweaking your linear SVM is the right way to approach your problem, you can look at modifying the class weights. Note that what you suggest as weights is probably the opposite of what you want to do: giving more weight to the more frequent classes marginalizes the less frequent ones further. Said differently, you typically want to use weights that are inversely proportional to class frequencies. You can calculate these manually, or you can let sklearn do it automatically for you by specifying class_weight='balanced'.
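Both options in scikit-learn, on a hypothetical 90/10 imbalanced dataset (`compute_class_weight` with "balanced" uses the same inverse-frequency formula as `class_weight='balanced'`):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.array([0] * 90 + [1] * 10)  # class 1 is rare

# Manual weights, inversely proportional to class frequency:
weights = compute_class_weight(class_weight="balanced", classes=np.unique(y), y=y)
print(dict(zip(np.unique(y), weights)))  # {0: 0.555..., 1: 5.0}

# Or let sklearn derive the same weights automatically:
clf = LinearSVC(class_weight="balanced").fit(X, y)
```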

From <https://stats.stackexchange.com/questions/254779/optimal-class-weight-for-svc>