Machine learning

From Computer Science Wiki
 

Latest revision as of 07:29, 19 May 2023

Case study notes[1]

Introduction

Machine learning (ML) is a field of inquiry devoted to understanding and building methods that 'learn', that is, methods that leverage data to improve performance on some set of tasks.[1] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.[2]



The recommender problem

Estimate a utility function that automatically predicts how much a user will like an item.[3]

Based on:

  • Past behavior
  • Relation to other users
  • Item similarity
  • Context

A way to approach this problem

One way to estimate a utility function that predicts how a user will like an item is to use a machine learning model that takes as input the past behavior of the user, the relation of the user to other users, the similarity of the item to other items, and the context in which the item is being presented. The output of the model would be a prediction of how much the user will like the item.

To build such a model, you would need to gather data on users' past behavior, including ratings or preferences for different items. You would also need data on the relationships between users, such as whether they are friends or share similar interests. The item similarity data could be based on the features of the items or on their co-occurrence with other items in past user behavior. The context in which the item is presented could include information such as the time of day, the user's location, and the user's current activity or task.

Using this data, you could train a machine learning model, such as a decision tree, random forest, or neural network, to predict the user's preference for an item based on these factors. The model could then be used to estimate the utility of an item for a particular user by making a prediction of how much the user will like the item based on the input features.
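As a concrete illustration, here is a minimal sketch of such a model in Python: a k-nearest-neighbour average over hand-made feature vectors. The feature encoding and the data below are invented for illustration, not taken from the case study.

```python
from math import dist

def predict_utility(query, examples, k=3):
    """Predict a rating for `query` (a feature vector) by averaging
    the ratings of the k nearest past examples (feature_vector, rating)."""
    nearest = sorted(examples, key=lambda ex: dist(query, ex[0]))[:k]
    return sum(rating for _, rating in nearest) / len(nearest)

# Hypothetical features: [user's average rating, item similarity score, hour of day / 24]
history = [
    ([4.0, 0.9, 0.5], 5.0),
    ([4.0, 0.2, 0.5], 2.0),
    ([3.0, 0.8, 0.9], 4.0),
    ([2.0, 0.1, 0.1], 1.0),
]
print(predict_utility([3.8, 0.85, 0.6], history, k=2))  # averages the 2 nearest ratings
```

A decision tree, random forest or neural network would replace `predict_utility` with a trained model, but the input/output contract stays the same: features in, predicted preference out.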

Approaches to recommendation

  • Collaborative filtering: recommend based on past behavior
    • User based: recommend what other users like who are like me
    • Item based: find similar items to those I have previously liked
  • Content-based: recommend based on item features
  • Personalized learning to rank: treat recommendation as a ranking problem
  • Demographic: recommend based on user features
  • Social recommendations: trust-based
  • Hybrid: any combination of the above
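
The user-based collaborative filtering idea can be sketched in a few lines: compute the cosine similarity between users' rating vectors, then recommend the item that the most similar user rated highest among those the target user has not rated. The ratings below are invented, with 0 meaning "unrated":

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def user_based_recommend(target, others):
    """Return the index of the unrated item (rating 0 in `target`)
    that the most similar other user rated highest, or None."""
    best_user = max(others, key=lambda u: cosine(target, u))
    candidates = [(r, i) for i, (t, r) in enumerate(zip(target, best_user)) if t == 0]
    return max(candidates)[1] if candidates else None

alice = [5, 3, 0, 1]      # alice has not rated item 2
others = [
    [5, 4, 4, 1],         # tastes similar to alice's
    [1, 0, 5, 4],
]
print(user_based_recommend(alice, others))  # item 2, liked by the similar user
```

An item-based variant would instead compare column (item) vectors; content-based filtering would compare item feature vectors rather than rating vectors.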

Terminology

  1. Behavioural data
  2. Cloud delivery models:
    1. Infrastructure as a service (IaaS)
    2. Platform as a service (PaaS)
    3. Software as a service (SaaS)
  3. Cloud deployment models
  4. Collaborative filtering
  5. Content-based filtering
  6. Cost function
  7. F-measure
  8. Precision
  9. Recall
  10. Right to anonymity
  11. Right to privacy
  12. Matrix factorization
  13. Hyperparameter
  14. K-nearest neighbour (k-NN) algorithm
  15. Backpropagation
  16. Mean absolute error (MAE)
  17. Overfitting
  18. Popularity bias
  19. Reinforcement learning
  20. Root-mean-square error (RMSE)
  21. Stochastic gradient descent
  22. Training data
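
Several of these terms fit together in one technique: matrix factorization learns user and item factors by stochastic gradient descent on a squared-error cost function over the training data, and RMSE measures the resulting fit. A rough self-contained sketch, with toy ratings and arbitrarily chosen hyperparameters:

```python
import random
from math import sqrt

# Toy ratings matrix: rows = users, columns = items, None = missing.
R = [
    [5, 3, None, 1],
    [4, None, None, 1],
    [1, 1, None, 5],
    [None, 1, 5, 4],
]

def factorize(R, k=2, steps=4000, lr=0.05, reg=0.02, seed=0):
    """Learn user factors P and item factors Q so that P[u] . Q[i] ~ R[u][i],
    by stochastic gradient descent on the regularized squared-error cost."""
    rng = random.Random(seed)
    n_users, n_items = len(R), len(R[0])
    P = [[rng.uniform(0, 0.5) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(0, 0.5) for _ in range(k)] for _ in range(n_items)]
    observed = [(u, i, r) for u, row in enumerate(R)
                for i, r in enumerate(row) if r is not None]
    for _ in range(steps):
        u, i, r = rng.choice(observed)          # one random training example per step
        pred = sum(P[u][f] * Q[i][f] for f in range(k))
        err = r - pred
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)
            Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

P, Q = factorize(R, k=2)
rmse = sqrt(sum((r - sum(P[u][f] * Q[i][f] for f in range(2))) ** 2
                for u, row in enumerate(R) for i, r in enumerate(row) if r is not None)
            / sum(r is not None for row in R for r in row))
print(round(rmse, 2))  # should be small once the factors fit the observed ratings
```

The learned dot products for the `None` cells are the model's rating predictions, which is how matrix factorization serves as a recommender.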

Not listed in the case study but perhaps useful:

  1. Binary classification model
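
Precision, recall and the F-measure from the terminology list are easiest to illustrate against a binary classification model's predictions. A small sketch with invented labels:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall and F-measure (F1) for binary labels (0/1)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]   # ground-truth labels
y_pred = [1, 1, 0, 1, 0, 0]   # model predictions
print(precision_recall_f1(y_true, y_pred))  # here all three come out to 2/3
```

Precision asks "of the items I flagged, how many were right?", recall asks "of the items I should have flagged, how many did I find?", and F1 is their harmonic mean.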

Examples

An excellent, and I truly mean excellent, example is MarI/O, a machine learning program that learns how to play Super Mario World and Mario Kart.

  1. Super Mario World: https://www.youtube.com/watch?v=qv6UVOQ0F44
  2. Mario Kart: https://www.youtube.com/watch?v=S9Y_I9vY8Qw

[4]


References