Support Vector Machine (SVM) Theory
In this tutorial, we'll describe the mathematics behind Support Vector Machines. This tutorial is much more math-heavy than our other tutorials. If you're mostly interested in applying SVMs to solve problems, then our first tutorial on SVMs is sufficient. However, if you would like to understand the mathematical basis of Support Vector Machines, then you'll find this tutorial interesting.
In this tutorial, we will focus on the hard-margin SVM and soft-margin SVM. However, we will not be considering kernels or the hyper-parameter \gamma (gamma).
In SVM, the decision function for predicting the label of a data point x corresponds to the equation of a hyperplane:
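As a minimal sketch of this idea: the SVM predicts based on which side of the hyperplane a point falls, i.e. the sign of w · x + b. The weight vector `w` and bias `b` below are illustrative placeholder values, not learned parameters.

```python
import numpy as np

# Illustrative (not learned) parameters of the separating hyperplane w . x + b = 0
w = np.array([2.0, -1.0])
b = -0.5

def predict(x):
    """Classify a point as +1 or -1 depending on which side of the hyperplane it lies."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(predict(np.array([1.0, 0.0])))   # lies on the positive side
print(predict(np.array([-1.0, 1.0])))  # lies on the negative side
```

The later sections on hard-margin and soft-margin SVMs are about how `w` and `b` are chosen so that this hyperplane separates the training data with the largest possible margin.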
Natural Language Identification: What it is, why it is important, and how it works.
What is natural language identification?
Language Identification is the task of computationally determining the language of a given piece of text data. A text document could be written entirely in a single language such as English, French, German, or Spanish (monolingual language identification), or it could contain multiple languages in different parts (multilingual language identification).
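To make the task concrete, here is a toy sketch of monolingual language identification: score each candidate language by how many of the text's words appear in a small list of that language's common words. Real systems typically use character n-gram statistics or trained classifiers, and the word lists below are illustrative, not exhaustive.

```python
# Tiny per-language lists of very common words (illustrative only)
COMMON_WORDS = {
    "english": {"the", "is", "and", "of", "to", "in"},
    "french":  {"le", "la", "et", "de", "un", "est"},
    "spanish": {"el", "y", "de", "un", "es", "que"},
}

def identify_language(text):
    """Return the candidate language whose common-word list matches the text best."""
    words = text.lower().split()
    scores = {lang: sum(w in vocab for w in words)
              for lang, vocab in COMMON_WORDS.items()}
    return max(scores, key=scores.get)

print(identify_language("the cat is in the house"))  # english
```

Even this naive approach fails on short or mixed-language inputs, which is exactly why the multilingual case mentioned above is harder.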
Why is language identification important?
Language identification is important for most NLP applications to work accurately, since models are usually trained using data from a single language. If a model is trained on English text and...
Machine learning everywhere, why not in Competitive programming? – Anudeep's blog
[...] Let’s talk about some of the issues in CP platforms currently:
There is no recommendation system – Would it not be amazing if you got recommendations for similar problems based on what you have already solved?
Tags are mostly broken – They tell us whether a problem involves math or binary search, but give no information about what kind of math or how complex it is. There are also a lot of problems without any tags.
It is very common for beginners to get stuck on problems. Currently they search for an editorial or a short explanation, but many problems do not have editorials or explanations (SPOJ, lots of regional contests, other coding camp problems, etc.).
Also, in the case where an editorial exists, the programmer reads it and gets an idea of the whole solution. So that is a jump from 0 to 1. It would be better to give progressive hints on how to solve the problem, this wil...
Dropout is a widely used regularization technique for neural networks. Neural networks, especially deep neural networks, are flexible machine learning algorithms and hence prone to overfitting. In this tutorial, we'll explain what dropout is and how it works, including a sample TensorFlow implementation.
If you [have] a deep neural net and it's not overfitting, you should probably be using a bigger one and using dropout, ... - Geoffrey Hinton 
Dropout is a regularization technique where, during each iteration of gradient descent, we drop a set of neurons selected at random. By drop, we mean that for that iteration we act as if those neurons do not exist.
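As a minimal illustration (in plain NumPy rather than TensorFlow), the common "inverted dropout" variant can be sketched as follows, assuming a drop rate of 0.5: each activation is kept with probability 1 − rate and the survivors are scaled by 1 / (1 − rate) so the expected activation is unchanged, and at test time the layer does nothing.

```python
import numpy as np

def dropout(activations, rate=0.5, training=True, seed=None):
    """Inverted dropout: zero out each activation with probability `rate`
    during training, scaling the survivors so the expected value is unchanged.
    At test time, return the activations untouched."""
    if not training:
        return activations
    rng = np.random.default_rng(seed)
    keep_prob = 1.0 - rate
    mask = rng.random(activations.shape) < keep_prob  # True = neuron kept
    return activations * mask / keep_prob

a = np.ones((2, 4))
print(dropout(a, seed=0))           # some entries zeroed, survivors scaled to 2.0
print(dropout(a, training=False))   # identity at test time
```

A fresh random mask is drawn on every training iteration, which is what forces the network not to rely on any single neuron.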