added classifier notebook with SVM

This commit is contained in:
aj 2021-02-04 13:34:25 +00:00
parent 5e703b011f
commit 01384203db
6 changed files with 553 additions and 95 deletions


@@ -1,6 +1,10 @@
# Listening Analysis
Notebooks, [analysis](analysis.ipynb), [artists](artist.ipynb) & [playlist](playlist.ipynb) investigations and other [stats](stats.ipynb).
Notebooks:
* [analysis](analysis.ipynb) for an intro to the dataset and premise
* [artist](artist.ipynb), [album](./album.ipynb) & [playlist](playlist.ipynb) investigations
* [stats](stats.ipynb) for high-level stats about the dataset (e.g. the Spotify feature miss ratio)
* [playlist classifier](./playlist-classifier.ipynb) using scikit-learn to classify tracks, with playlist contents as the training data (sketched below)
Combining Spotify & Last.fm data for exploring habits and trends
Uses two data sources,
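
Purely as an illustration of the playlist-classifier idea, a minimal scikit-learn SVM sketch follows; the `playlist` label column, the feature column names and the `train_playlist_classifier` helper are assumptions for the example, not the notebook's actual code.

```python
# Sketch only: train an SVM to predict which playlist a track belongs to
# from its Spotify audio features. Column names are assumed, not the
# notebook's real schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FEATURES = ["acousticness", "danceability", "energy", "instrumentalness",
            "liveness", "loudness", "speechiness", "tempo", "valence"]

def train_playlist_classifier(tracks: pd.DataFrame):
    """tracks: one row per track, FEATURES columns plus a 'playlist' label."""
    X, y = tracks[FEATURES], tracks["playlist"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)
    # Scale before the SVM: tempo spans roughly 0-250 BPM while most other
    # features are 0-1 confidences, and SVMs are sensitive to that.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```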

File diff suppressed because one or more lines are too long


@@ -33,7 +33,9 @@
"2. Spotify audio features\n",
"\n",
"The two are joined by searching Last.fm tracks on Spotify to get a Uri, the track name and artist name are provided for the query.\n",
"These Uris can be used to retrieve Spotify feature descriptors. `all_joined()` gets a BigQuery of that joins the scrobble time series with their audio features and provides this as a panda frame."
"These Uris can be used to retrieve Spotify feature descriptors. `all_joined()` gets a BigQuery of that joins the scrobble time series with their audio features and provides this as a panda frame.\n",
"\n",
"Explorations are made from [album](./album.ipynb), [artist](./artist.ipynb) and [playlist](./playlist.ipynb) perspectives. "
],
"cell_type": "markdown",
"metadata": {}
@@ -75,6 +77,51 @@
"scrobbles.dtypes"
]
},
{
"source": [
"# Spotify Descriptor\n",
"\n",
"The Spotify API provides access to various characteristics about a track, they are used here for exploring listening habits. The descriptions from the [Spotify API Documentation](https://developer.spotify.com/documentation/web-api/reference/#object-audiofeaturesobject) can be seen below:\n",
"\n",
"### acousticness\n",
"A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.\n",
"\n",
"### danceability\n",
"Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.\n",
"\n",
"### energy\n",
"Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.\n",
"\n",
"### instrumentalness\n",
"Predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly “vocal”. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.\n",
"\n",
"### key\n",
"The key the track is in. Integers map to pitches using standard Pitch Class notation . E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on.\n",
"\n",
"### liveness\n",
"Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live. \tFloat\n",
"\n",
"### loudness\n",
"The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typical range between -60 and 0 db.\n",
"\n",
"### mode\n",
"Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.\n",
"\n",
"### speechiness\n",
"Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks.\n",
"\n",
"### tempo\n",
"The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.\n",
"\n",
"### time_signature\n",
"An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure).\n",
"\n",
"### valence\n",
"A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry)."
],
"cell_type": "markdown",
"metadata": {}
},
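
For context, descriptors like these can be fetched with spotipy's `audio_features` endpoint. The snippet below is an illustration only, not the repo's retrieval code: the track URI is an arbitrary example and client credentials are assumed to be in the standard SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET environment variables.

```python
# Illustration only: fetch the descriptors above for one track and decode
# the key/mode integers. The URI is an arbitrary example.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

PITCH_CLASSES = ["C", "C#/Db", "D", "D#/Eb", "E", "F",
                 "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B"]

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
features = sp.audio_features(["spotify:track:4uLU6hMCjMI75M1A2tKUQC"])[0]

# key is a pitch-class integer (0 = C, ...; -1 when undetected),
# mode is 1 = major / 0 = minor
key = PITCH_CLASSES[features["key"]] if features["key"] >= 0 else "?"
mode = "major" if features["mode"] == 1 else "minor"
print(f'{key} {mode}, {features["tempo"]:.0f} BPM, '
      f'valence {features["valence"]:.2f}')
```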
{
"cell_type": "code",
"execution_count": 4,

File diff suppressed because one or more lines are too long

playlist-classifier.ipynb (new file, 311 additions)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long