
Shagun Sodhani

Analytics and Data Science team @ Adobe Systems

Active In

Deep Learning

9 replies. 34 discussions. Member

Machine Learning

13 discussions. Member

Big Data

1 discussion. Member

Artificial Intelligence

Member

Featured Contributions

1.

Q&A session

Q&A session with Shagun Sodhani, Analytics & Data Science team @ Adobe Systems, by Keshav Dhandhania

Hi, I am Shagun Sodhani, a computer science graduate from the Indian Institute of Technology (IIT), Roorkee. Presently, I am working with the Analytics and Data Science team at Adobe Systems. ...


Shagun Sodhani (Analytics and Data Science team @ Adobe Systems) · 28w

Can you DM me your mail id?



Shagun Sodhani (Analytics and Data Science team @ Adobe Systems) · 28w

Hi @Ganesh! Thanks for reaching out, but I am not working with Adobe any more. I can connect you with some folks though :)


4.

Tutorial · Deep Learning

Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension

Introduction

- The paper proposes a two-stage synthesis network that can perform transfer learning for the task of machine comprehension.
- The problem is the following:
  - We have a domain D_S for which we have a labelled dataset of question-answer pairs, and another domain D_T for which we do not have any labelled dataset.
  - We use the data for domain D_S to train SynNet and use that to generate synthetic question-answer pairs for domain D_T.
  - Now we can train a machine comprehension model M on D_S and fine-tune it using the synthetic data for D_T.
- Link to the paper
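The two-stage transfer recipe above can be sketched end to end. Everything below is a hypothetical toy stand-in invented for illustration (a template question generator and a dictionary "model") to show the data flow only, not the paper's neural answer-synthesis and question-synthesis modules.

```python
# Toy sketch of the SynNet transfer pipeline (all models are placeholders).

def train_synnet(source_pairs):
    """Stage 1 proposes candidate answers, stage 2 generates questions
    conditioned on (paragraph, answer). Toy stand-in: answer = longest
    token of the paragraph, question = a fixed template."""
    def generate(paragraph):
        answer = max(paragraph.split(), key=len)
        question = f"What does the text say about {answer}?"
        return question, answer
    return generate

def train_mc_model(qa_pairs):
    """Toy MC 'model': memorise question -> answer."""
    return dict(qa_pairs)

# Labelled source domain D_S (e.g. SQuAD-style pairs).
source_qa = [("Who wrote Hamlet?", "Shakespeare")]

# Unlabelled target domain D_T: paragraphs only.
target_paragraphs = ["Penicillin was discovered by Alexander Fleming."]

synnet = train_synnet(source_qa)
synthetic_qa = [synnet(p) for p in target_paragraphs]

# Train M on D_S, then fine-tune on the synthetic D_T pairs
# (here: just extend the memorised mapping).
mc_model = train_mc_model(source_qa)
mc_model.update(synthetic_qa)
```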


5.

Tutorial

Higher-order organization of complex networks

- The paper presents a generalized framework for graph clustering (clusters of network motifs) on the basis of higher-order connectivity patterns.
- Link to the paper

- Given a motif M, the framework aims to find a cluster, i.e. a set of nodes S, such that the nodes of S participate in many instances of M while avoiding cutting instances of M (that is, situations where only a subset of the nodes of an instance of M appears in S).
- Mathematically, the aim is to minimise the motif conductance metric, given as cut_M(S, S′) / min[vol_M(S), vol_M(S′)], where S′ is the complement of S and cut_M(S, S′) = number...
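For concreteness, motif conductance for the triangle motif can be computed directly from these definitions. The brute-force enumeration below is an illustrative sketch only (the paper instead builds a motif adjacency matrix and uses a spectral algorithm); cut_M counts motif instances with nodes on both sides of the cut, and vol_M counts motif-instance endpoints inside a side.

```python
from itertools import combinations

def motif_conductance(nodes, edges, S):
    """Motif conductance for the triangle motif M on an undirected graph:
    cut_M(S, S') / min(vol_M(S), vol_M(S'))."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Enumerate all triangle instances of M (brute force, small graphs only).
    triangles = [t for t in combinations(nodes, 3)
                 if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]
    S = set(S)
    Sc = set(nodes) - S
    # Triangles with at least one node on each side are "cut".
    cut = sum(1 for t in triangles if set(t) & S and set(t) & Sc)
    vol = lambda side: sum(len(set(t) & side) for t in triangles)
    return cut / min(vol(S), vol(Sc))

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge (2,3).
nodes = [0, 1, 2, 3, 4, 5]
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(motif_conductance(nodes, edges, {0, 1, 2}))  # no triangle is cut -> 0.0
print(motif_conductance(nodes, edges, {0, 1}))     # triangle (0,1,2) is cut
```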


6.

Tutorial

Network Motifs - Simple Building Blocks of Complex Networks

- The paper presents the concept of “network motifs” to understand the structural design of a network or a graph.
- Link to the paper

- A network motif is defined as “a pattern of inter-connections occurring in complex networks in numbers that are significantly higher than those in randomized networks”.
- In the practical setting, given an input network, we first create randomized networks which have the same single-node characteristics (like the number of incoming and outgoing edges) as the input network.
- The patterns that occur at a much higher f...
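A common way to build such degree-preserving randomized networks is repeated double-edge swaps. The sketch below assumes an undirected simple graph (the paper works with directed networks, where in- and out-degrees are preserved separately); the function name is mine, not from the paper.

```python
import random

def degree_preserving_randomize(edges, n_swaps, seed=0):
    """Randomize an undirected graph by double-edge swaps: pick edges
    (a,b) and (c,d), rewire to (a,d) and (c,b). Every node keeps its
    degree, so single-node characteristics match the input network."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        # Skip swaps that would create self-loops or parallel edges.
        if len({a, b, c, d}) < 4:
            continue
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
randomized = degree_preserving_randomize(edges, n_swaps=100)
```

Motif counts in the input are then compared against the distribution of counts over many such randomized networks.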


7.

Tutorial · Deep Learning

Word Representations via Gaussian Embedding

- Existing word embedding models like Skip-Gram, GloVe, etc. map words to fixed-size vectors in a low-dimensional vector space.
- This fixed-point setting cannot capture uncertainty about a representation.
- Further, these fixed-point vectors are compared with measures like dot product and cosine similarity, which are not suitable for capturing asymmetric properties like textual entailment and inclusion.
- The paper proposes to learn Gaussian embeddings (with diagonal covariance) for the words.
- This way, words are mapped to soft regions in the embedding space, which enables modeling uncertainty and asymmetric properties like inclusion and entailment.
- Link to the paper
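The asymmetry point can be made concrete with the KL divergence between two diagonal-covariance Gaussians, which the paper uses as one of its (asymmetric) energy functions; a minimal sketch, with means and variances as plain lists:

```python
import math

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL(N0 || N1) for diagonal-covariance Gaussians. Asymmetric, so it
    can model directed relations such as entailment/inclusion: a broad
    (high-variance) concept can 'contain' a specific (low-variance) word."""
    return 0.5 * sum(
        v0 / v1 + (m1 - m0) ** 2 / v1 - 1 + math.log(v1 / v0)
        for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1)
    )
```

Unlike cosine similarity, swapping the arguments changes the value, which is exactly what a directed relation needs.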


8.

Tutorial · Deep Learning

HARP - Hierarchical Representation Learning for Networks

- HARP is an architecture to learn low-dimensional node embeddings by compressing the input graph into smaller graphs.
- Link to the paper.
- Given a graph *G = (V, E)*, compute a series of successively smaller (coarser) graphs *G_0, …, G_L*. Learn the node representations in *G_L* and successively refine the embeddings for the larger graphs in the series.
- The architecture is independent of the algorithms used to embed the nodes or to refine the node representations.
- **Graph coarsening technique that preserves global structure** - collapse edges and stars to preserve first- and second-order proximity.
- **Edge collapsing** - select the subset of *E* s...
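A minimal sketch of one edge-collapsing step: greedily pick a matching (each node touched by at most one chosen edge) and merge the endpoints of every matched edge into a supernode. The greedy matching and the supernode labelling are illustrative choices of mine, and the paper pairs this with a star-collapsing step.

```python
def edge_collapse(nodes, edges):
    """One edge-collapsing coarsening step: merge the endpoints of a
    greedy matching into supernodes and re-map the remaining edges."""
    matched = {}      # node -> supernode label
    used = set()
    for u, v in edges:
        if u not in used and v not in used:
            used |= {u, v}
            matched[u] = matched[v] = min(u, v)   # label = smaller endpoint
    for n in nodes:                                # unmatched nodes survive as-is
        matched.setdefault(n, n)
    coarse_nodes = sorted(set(matched.values()))
    coarse_edges = {(min(matched[u], matched[v]), max(matched[u], matched[v]))
                    for u, v in edges if matched[u] != matched[v]}
    return coarse_nodes, sorted(coarse_edges)

# A 4-node path 0-1-2-3 collapses to a single edge between two supernodes.
coarse_nodes, coarse_edges = edge_collapse([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
```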


9.

Tutorial · Deep Learning

Swish - a Self-Gated Activation Function

- The paper presents a new activation function called Swish, with formulation *f(x) = x·sigmoid(x)*, and its parameterised version, Swish-β, where *f(x, β) = x·sigmoid(β·x)* and β is a trainable parameter.
- The paper shows that Swish consistently outperforms ReLU and other activation functions over a variety of datasets (CIFAR, ImageNet, WMT 2014), though in some cases only by small margins.
- Link to the paper
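Since x·sigmoid(βx) = x / (1 + e^(−βx)), the function is a one-liner; note that β → 0 recovers a scaled linear function x/2, while large β approaches ReLU.

```python
import math

def swish(x, beta=1.0):
    """Swish-beta activation: f(x, beta) = x * sigmoid(beta * x).
    beta = 1 gives plain Swish; in the parameterised variant beta is
    learned during training."""
    return x / (1.0 + math.exp(-beta * x))
```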


10.

Tutorial · Deep Learning

Reading Wikipedia to Answer Open-Domain Questions

- The paper presents a new machine comprehension dataset for question answering in a real-life setting (say, when interacting with Cortana/Siri).
- Link to the paper

- Existing machine comprehension (MC) datasets are either too small or synthetic (with a distribution different from that of real questions posted by humans). MARCO questions are sampled from real, anonymized user queries.
- Most datasets would provide a comparatively small and clean context to answer the question. In MARCO, the context documents (which may or may not contain the answ...


11.

Tutorial · Deep Learning

Task-Oriented Query Reformulation with Reinforcement Learning

- The paper introduces a query reformulation system that rewrites a query to maximise the number of “relevant” documents that are extracted from a given black box search engine.
- A Reinforcement Learning (RL) agent selects the terms that are to be added to the reformulated query and the rewards are decided on the basis of document recall.
- Link to the paper
- Implementation

- The underlying problem is as follows: when the end user makes a query to a search engine, the engine o...
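The recall-based reward is straightforward to compute; a minimal sketch (the function name and the document-ID representation are mine, not from the paper):

```python
def recall_reward(retrieved, relevant):
    """Reward for a reformulated query: the fraction of ground-truth
    relevant documents that appear in the search engine's results."""
    relevant = set(relevant)
    if not relevant:
        return 0.0
    return len(set(retrieved) & relevant) / len(relevant)
```

The RL agent adds or drops candidate terms from the query to push this recall up, with the search engine treated purely as a black box.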
