AI Saturdays – Medium: Making rigorous AI education accessible and free, in 50+ cities globally. Sign up at https://nurture.ai/ai-saturdays
- Bridging The Artificial Intelligence (AI) Gaps With AI6, by Tejumade Afonja on January 4, 2019 at 1:19 pm
Bridging The Artificial Intelligence (AI) Gap With AI Saturdays — Our Journey So Far

“A Candle Loses Nothing by Lighting Another Candle.” - Italian Proverb

AI Saturdays (AI6) is a free-to-attend, community-driven, non-profit global movement organized in 50+ cities across the globe, including Bangalore, Lagos, Toronto, Singapore and Sunnyvale, with the mission of making Artificial Intelligence education of the quality and rigour of the world’s best universities accessible to anyone through structured study groups. These study groups are held in cohorts of 14–16 weeks, every Saturday, with structured materials and sessions targeted at people of all skill levels.

AI Saturdays Lagos — The Beginning

With a burning desire and passion to equip ourselves and others with Artificial Intelligence skills, the mission and vision of the global AI6 movement resonated with Azeez Oluwafemi and me the moment it was announced. We were quick to kick-start AI6 in our local community, Lagos, Nigeria, on January 6th, 2018. We organized and facilitated 16-week study groups from January to April and then from August to November. As ambassadors, we played a pivotal role in helping the AI community in our city stay up to date with the latest methods in artificial intelligence and become able to implement the most cutting-edge AI models out there. For our first cycle, we walked our participants through computer vision using Stanford’s CS231n and fast.ai’s Deep Learning for Coders. We garnered extremely enthusiastic participants who dedicated their time and effort to learning, teaching and sharing. They built group and personal projects ranging from exploring different deep learning frameworks to building a classifier for Nigerian currency notes and an artistic style transfer model. The first cycle ended with a bang!
First Lagos AI Hackathon

After the first cycle, we co-organized a two-day computer vision hackathon with Lagos Women in Machine Learning and Data Science. This hackathon was sponsored by Intel and was held on the 7th and 8th of April 2018, with a total of 50 participants: 6 females and 44 males. The theme of the hackathon was deep learning, and the challenge was to classify food image data. We chose food because AI6 Lagos is currently working on a project called #ChowNet.

Bi-Monthly AI Saturdays

While awaiting the next global cohort, set to resume in August, we ran a bi-monthly cohort from May till June 2018 where we went through Stanford CS20 (Introduction to TensorFlow) and an introduction to PyTorch.

Second Lagos AI Hackathon

In July, we co-organized a second edition of the Lagos AI hackathon with Women in Machine Learning and Data Science (WiMLDS) and InstaDeep, an African AI company. This hackathon was supported by Facebook and Africa’s Talking and was held on 21st July 2018, with teams of 3–4 and 10 teams in total. The task was to build a model that predicts hourly taxi ride demand in New York City based on population census blocks. We provided a baseline score that served as an extra incentive, to be beaten. The competition started at 10:40 am and ended at 6 pm, with a one-hour break in between for food and drinks. The winning team, made up of AI6 Lagos students, got an internship opportunity at InstaDeep, while the first and second runner-up teams got Google Home devices and Bluetooth hands-free devices respectively. This hackathon left Karim Beguir, InstaDeep’s CEO, with even more optimism that Africa indeed has the talent needed to build cutting-edge AI models.

Winning team — InstaDeepers

AI Saturdays Lagos — Second Cohort

In August, AI6’s registration for the second cohort rolled out. We kick-started our second cohort on August 11th, 2018 and ran two different classes, for beginner and intermediate students.
For the beginners’ class, we dived deep into Machine Learning using the Coursera Machine Learning course by Andrew Ng. We walked through the theory and programming assignments, with follow-up group projects on MNIST and Fashion MNIST. All resources and programming assignments are documented on our Github repository for individual or community use. For the intermediate track, we had remote instructors walk us through different aspects of Artificial Intelligence. The following leading AI practitioners and researchers joined us, remotely and in person, to give a session on a topic of their interest:

- Dr. Timnit Gebru, co-founder of Black in AI and Research Scientist at Google AI, was our first remote speaker and gave a session on Vision and Ethics, drawing on Gender Shades, her research project with Joy Buolamwini.
- Prof. Neil Lawrence, Professor of Machine Learning, remotely gave a tutorial on Probabilistic Machine Learning.
- Siraj Raval, Director of School of AI, hung out with us remotely to answer 21 questions we had for him. Check it out.
- Dr. Dina Machuve, Lecturer at the Nelson Mandela African Institution of Science and Technology, gave a remote tutorial session on Bio-Informatics: A Data Science Perspective.
- Robert John, Google Developer Expert in Machine Learning, gave a session on the TensorFlow Estimator API.
- Prof. Tom Dietterich, Emeritus Professor of Computer Science at Oregon State University, gave a remote session on Anomaly Detection: Building Robust Machine Learning Algorithms.
- Dr. Damon Civin, Principal Data Scientist at Arm, gave a remote session on Deploying ML Models in the Wild.
- Rediet Abebe, co-founder of Black in AI and PhD candidate at Cornell, gave a remote session on Using Search Queries to Understand Health Information Needs in Africa.
- Dr. Alan Nichol, co-founder of the conversational AI company Rasa, gave a remote session on How to Build a Conversational “Level 3” AI Bot.
- Sara Hooker, AI Resident at Google AI and founder of Delta Analytics, gave a remote session on Beyond Accuracy.
- Allen Akinkunle, Data Scientist (Executive) at Ernst & Young, gave a remote session on Survival Analysis.
- Femi Azeez, AI Saturdays Lead (Lagos & Kigali) and CMU Africa Masters student, gave a remote session on Applying to CMU Africa.
- John Olafenwa, co-founder of DeepQuest AI, gave a session on Convolutional Neural Networks.
- Lekan Wahab, Machine Learning Engineer, gave a session on Debugging Classifiers.
- Prof. Elaine Nsoesie, Assistant Professor at Boston University School of Public Health, gave a session on Data Visualization.

Our Heroes 🙂 Thank you.

HackExpo — Into the Future

In November, we co-organized HackExpo2018 with DeepQuest AI, an AI startup on a mission to advance and democratize Artificial Intelligence and make it accessible. HackExpo was a two-day event, a hackathon and exposition, sponsored by Intel, Facebook and InstaDeep, with about 150 attendees across both days. The exposition was a demonstration of cutting-edge technology in the field of Artificial Intelligence and a series of intelligent discussions. AI6 Lagos members accounted for over 75% of the exposition attendees and over 90% of the hackathon participants. The main hackathon was preceded by a Kaggle competition on the Fashion MNIST dataset. The challenge was on road traffic congestion and accidents, motivated by the prevalence of traffic congestion on our Lagos roads and by traffic issues as a larger problem in Nigeria as a whole. The participants were presented with a dataset of 4,000 traffic images to be classified into four categories: Sparse Traffic, Dense Traffic, Accident and Vehicle on Fire. A benchmark score was set as a threshold, but the first submission rendered it a relic.
By the end of the competition, the benchmark score was better than only one group’s submission. The members of the top 3 winning teams were mostly AI6 Lagos students, which goes a long way to show the effort and dedication they have put into honing their skills since the start of the cohorts. A well-deserved victory.

ChowNet

In our effort to contribute to the data collection process in Africa, we started a community project called ChowNet, inspired by the ImageNet dataset. The motivation for ChowNet is to build an image repository for African local food, starting with Nigeria. I remotely presented this project as a poster at the Black in AI 2018 workshop, which was co-located with NeurIPS at the Palais des Congrès de Montréal on December 7th, 2018.

Conclusion

It’s been an amazing journey co-running AI6 Lagos with amazing colleagues and friends. We’ve put a lot of effort into our work because we believe we can truly democratize Artificial Intelligence education by creating a community that enables studying, researching and building AI products for our ecosystem and beyond. With the help of the community and a lot of personal effort and hard work, our members have gone on to secure their dream jobs, win several hackathons, switch to AI-related careers, run AI hubs, inspire other AI6 communities across the country and even kick-start AI6 in other cities in Nigeria. Over the two cycles, we have had over 150 participants, and being at the forefront of the Artificial Intelligence movement in Nigeria, we have gathered a large community of AI enthusiasts, with over 1.2k members on Meetup, 1.6k followers on Twitter and experts in the field at home and abroad. AI6 Lagos is fast becoming a reference point in the Nigerian tech space as far as Artificial Intelligence and Data Science are concerned.
The AI ecosystem in Nigeria is growing fast, with communities like ours and organizations like DataScience Nigeria, a registered US charity and Nigerian non-profit which runs an end-to-end AI knowledge and application ecosystem and has gained international mentions, taking the lead and doing a fantastic job of showcasing AI talent in Nigeria to the whole world. More AI6 sister communities have sprung up across the country. We have a vibrant and active AI6 community in the federal capital, Abuja, where over 30 members have gone on to start working on interesting projects like sound detection for security, a parking lot management system using number plate detection, and research on skin infection classification. We also have active AI6 communities in other parts of Nigeria like Ibadan, Ogbomosho, Abeokuta and Jos. With companies like InstaDeep, a leading pan-African AI startup with offices in London, Paris, Kenya, Tunisia and now Nigeria, investing heavily in AI talent in Africa, the importance of the work that we do at AI6 cannot be over-emphasized. To conclude with the words of Prof. Moustapha Cisse in his recent article titled “Look to Africa to Advance Artificial Intelligence”: AI is profoundly changing societies, and its revolution offers a chance to improve lives without opening up or exacerbating inequalities; but this will require widening out the locations in which AI is done, and to do this we need a pan-African strategy: a set of ambitious goals for AI education, research and development, and industrialization. We are extremely excited about the future of AI in Nigeria and the role we play in making AI education a reality for all.

Sponsors and Partners

AI6 Lagos couldn’t have happened without our proud sponsors and partners, to whom we are immensely grateful. Intel is a multinational technology company.
Intel’s vision for AI, specifically deep learning, in 2019 and 2020 involves ushering it out of the early experimental age and onto just about every physical object in the world. Intel wants its hardware in the hands of researchers, built into gadgets and wearables, and powering corporate and developer needs. Intel has been our biggest sponsor since we started AI6 Lagos, and we’re very grateful. Thanks to the amazing Allela Roy, Wendy Boswell and colleagues for their continuous support. Facebook Dev Circle is a community of developers that connects local developers to collaborate, learn, and code. We were able to secure sponsorship from Facebook thanks to Innocent Amadi, the Lagos FB DevC community lead. Vesper Ng is a platform that uses Machine Learning to recommend affordable housing to people. Vesper sponsored our internet throughout the cohorts and is one of our oldest sponsors. Thank you, Dolapo Omidire. Google is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. Google believes AI will meaningfully improve people’s lives and that the biggest impact will come when everyone can access it — we couldn’t agree more. We were able to secure sponsorship for our second cohort from Google Developers, thanks to Aniedi Udo-Obong. Univelcity is a tech academy designed to accelerate the growth of tech talent in Africa by offering accelerated and immersive courses. We partner with Univelcity from time to time to hold our classes. Thanks to Joseph Agunbiade, who never once hesitated to have us in his building at a heavily subsidized rate. Brainiacs is an end-to-end technology solution provider. They provide training and skills-enhancement services for children and young adults (the creativity pack), application and systems development, and support for institutions in research and analytics.
We partnered with Brainiacs to hold our bi-monthly cohorts in their office space at no cost. Thanks to Mr. Musa Muhammad and team. Cisco is a worldwide leader in IT and networking. They help companies of all sizes transform how people connect, communicate, and collaborate. Thanks to Aisha Bello, we were able to hold part of our classes for the second cohort in Cisco Nigeria’s office for free. We couldn’t have had such an amazing experience running this community without every person who has helped us in one way or another, and to them we are very grateful. We look forward to partnering with more organizations and communities in the near future. One of our major challenges so far has been getting a stable venue for our classes: since we hold classes for 16 consecutive Saturdays, it becomes rather daunting to get any one venue without some form of payment. Securing sponsorship in this area would be highly appreciated. If you would like to partner with or sponsor us, please send an email to firstname.lastname@example.org

What’s up for 2019?

1. ChowNet: Exciting news for 2019 is that InstaDeep will be sponsoring our project. This sponsorship will go a long way towards helping us build a better incentive scheme into our data collection app and allow us to move faster on implementation. Our goal is to collect about 200–500 images (per class) of local African dishes in different parts of Africa.

2. Website: Our new and improved website is still in the works. Our team of volunteers is working tirelessly to make sure our chapter has an online presence. We can’t wait to share all that we do with the world.

3. In-Class Competitions: According to the feedback we received from participants of the concluded cohorts, one area where we need to put in more effort is competitions. We have acceded to their requests and will be introducing in-class competitions in the next cohort. The idea is to have a Kaggle competition that runs throughout the cohort.
The problems will generally be tailored towards materials introduced in class, but there will also be competitions requiring skills not yet taught. We believe this will help participants anticipate and try things out ahead of class, and then learn how to handle them better once taught.

4. Poster Presentations: Part of our goal at AI6 is to foster an environment where critical thinking and ground-breaking research can happen. We will hold poster presentations at the end of each cohort for members who have been working on any form of research, and possibly connect them with experts in their field of interest. IndabaXNigeria is happening in March 2019, and this kind of class exercise could help our members prepare for conferences like it.

5. Personal and Group Projects: We intend to continue with our model of having group projects mid-cohort; however, we will also encourage personal projects from our members.

6. Classes: For this next cycle, we plan to hold two separate classes: one for beginner/Machine Learning students and one for intermediate/Deep Learning students. Intermediate students with mature exposure will be encouraged to start implementing research papers with any framework of their choice, with each person discussing what they are working on through presentations to the other students.

Registration for the 3rd cohort is currently ongoing — we’re looking forward to having you. Please note: every participant is required to register for the cohort, and registration will remain open for the first 5 weeks of the cohort. Registered members who attend 60% of classes and complete all projects will be awarded certificates of participation at the end of the cohort.
❤ ❤ Many thanks to ‘Tayo Jabar, Olalekan Olapeju, George Igwegbe, Adetunji Adetola, Orevaoghene Ahia, Simon Ubi, Lawrence Francis and Stanley Obumneme Dukor for taking a big chunk of their time and energy to run this community with Femi and me ❤ ❤

Bridging The Artificial Intelligence (AI) Gaps With AI6 was originally published in AI Saturdays on Medium, where people are continuing the conversation by highlighting and responding to this story.
- AI Saturdays Damascus… Working on a data set with models built from scratch, by Abdullah Al-Saidi on October 3, 2018 at 1:25 am
AI6 Damascus takes place at the Incubator of Communication Technology (ICT), where AI enthusiasts meet, talk and learn about deep learning every Saturday. In our first session, we discussed the current state of the artificial intelligence industry in our country and how rare it is to find a real position in AI. That is because of the impact of the conflict on business: Syria is still in chaos, which certainly prevents business enterprises from getting involved in an emerging field like Artificial Intelligence. In addition, few people in AI-related disciplines build their applications on AI approaches by themselves, and it’s not an easy thing to do. So we set a goal to create a Syrian AI community that aims to help startups implement ideas locally, growing this field and reviving it. Recently, we finished the first two courses of Andrew Ng’s deep learning specialization on Coursera. We worked collectively on studying the courses’ materials throughout the week and discussed them in our meetings on Saturdays. We study, learn, discuss and come up with solutions together, applying the courses’ concepts from a much more practical point of view.

Our Meeting in a Minute

We started with the basics by implementing Logistic Regression and Neural Networks from scratch, and then we downloaded a dataset from Kaggle called ‘Social Network Analysis’ (you can find it here). We did some pre-processing, like encoding the Gender feature and scaling all features with the Min-Max Scaler from scikit-learn. When we implemented the models without scaling, the cost function always returned NaN (Not a Number) because of the log, so it was very necessary to scale the features here.
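The NaN issue is easy to reproduce: with unscaled features like salary, the sigmoid saturates to exactly 1.0, and the cross-entropy cost hits log(0). Below is a small sketch with made-up numbers (not our actual notebook code), using a NumPy min-max scaler equivalent to scikit-learn’s MinMaxScaler with its default range:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y, p):
    # Binary cross-entropy; log(0) terms produce inf/nan when p saturates.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def min_max_scale(X):
    # Same idea as scikit-learn's MinMaxScaler with default range [0, 1].
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Toy rows shaped like the dataset's Age and EstimatedSalary columns.
X = np.array([[19.0, 19000.0], [35.0, 20000.0], [47.0, 150000.0]])
y = np.array([0.0, 0.0, 1.0])
w = np.array([0.01, 0.01])       # arbitrary small weights

p_raw = sigmoid(X @ w)           # saturates to exactly 1.0 for large inputs
p_scaled = sigmoid(min_max_scale(X) @ w)

print(np.isnan(log_loss(y, p_raw)))       # True: cost breaks without scaling
print(np.isfinite(log_loss(y, p_scaled))) # True: cost is well behaved
```

After scaling, every feature lies in [0, 1], the logits stay small, and the cost is finite, which is exactly why the scaling step was necessary.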
We also split the data in two ways: one using array slicing and the other using the train_test_split method from scikit-learn. Finally, we adjusted the shape of the feature matrix to be [#features, #instances], and likewise the ground-truth vector ‘Purchased’, because Prof. Andrew assumes that layout in his Logistic Regression implementation. After implementing Logistic Regression, we applied the Neural Network model to the data in the same way. P.S. Anyone can find our full Jupyter Notebooks for the second and third sessions at these links: Session 2, Session 3.

AI Saturdays Damascus… Working on a data set with models built from scratch was originally published in AI Saturdays on Medium, where people are continuing the conversation by highlighting and responding to this story.
- AI Saturdays Bangalore Chapter: Everything about Convolutions (Week 6), by Suraj Bonagiri on September 28, 2018 at 10:01 am
After 5 Saturdays of covering the basic concepts required for understanding any modern deep architecture, it was time. Week 6 was all about convolutions: from basic concepts like stride and padding to building our own deep architecture while reasoning about why each hyperparameter was chosen. This week, we covered the contents of fast.ai Lesson 3. Apart from a solid introduction to convolutions, Lesson 3 also covers multi-label classification and how to use the fast.ai library on a Kaggle competition, namely Planet: Understanding the Amazon from Space. So, the post-lunch session was all about understanding the code and running the Kaggle kernel. We all know Convolutional Neural Networks (the second generation of neural networks) opened the Pandora’s box that helped us easily solve problems which were once thought very hard. So, what is this Convolutional Neural Network (CNN)? In layman’s terms, it’s a stack of convolution layers with a mix of other types of layers, which we will look into soon. A single convolution layer looks as follows: the blue grid is the input to the convolution layer. The darker area sliding over the input is called the filter or kernel. When the filter/kernel is slid over the input in a particular way, it produces the green grid, known as the feature map. This act of producing a feature map from a kernel and an input image is called convolution. Let’s take a look at the math of how a feature map is produced. When a kernel is slid over the input, it multiplies the corresponding values and sums them up. This is done until the kernel reaches the end of the image. When all the resulting outputs of the convolutions are arranged as shown, we get the feature map. In CNNs, convolution layers are stacked one upon another, so the feature map produced at one layer becomes the input to the next convolution layer. The filter can be slid over an input in many ways. So, which is the right way?
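The multiply-and-sum operation above can be sketched in plain NumPy. This is a minimal illustration with no padding (not how frameworks actually implement convolution, and the stride parameter here anticipates the hyperparameter discussion):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide the kernel over the image; at each position, multiply
    element-wise and sum to produce one value of the feature map."""
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1   # (N - F)/S + 1, no padding
    ow = (image.shape[1] - kw) // stride + 1
    fmap = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            fmap[i, j] = np.sum(patch * kernel)
    return fmap

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0                # a simple averaging filter
print(conv2d(image, kernel).shape)            # (3, 3): (5 - 3)/1 + 1 = 3
print(conv2d(image, kernel, stride=2).shape)  # (2, 2): a bigger stride shrinks the map
```

Note how the stride choice alone changes the feature map size, which is exactly the question of "how to slide" the kernel.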
Well, there is no one right way: it is a hyperparameter which we can choose, or vary and observe which setting gives the best performance. Mainly, there are 3 hyperparameters when it comes to a convolution layer: filter/kernel size, stride and padding.

Filter/kernel size: This is the dimension of the filter. For example, size=3 means the kernel dimension is 3×3. We choose the kernel size based on the characteristics of the input. If we want a larger receptive field, i.e., the kernel should be able to look at a bigger chunk of the input image, we go for a larger odd-sized filter. One might wonder why the size of the filter/kernel is odd. That is because convolution is all about capturing the correlation between a central pixel and its neighboring pixels. If we take an even-sized kernel, there will not be a central pixel, and we would miss the point of convolution. Mathematically it will work, but conceptually it is not intuitive.

Stride: This is the number of steps the filter/kernel is moved during convolution. In images, the number of steps is the number of pixels. Can you spot the difference? So, why would we want different strides? If the data is sparsely located in the image, the filter/kernel can slide over the input quickly, since not much information is present anyway. Similarly, if the information is dense, a small stride is desired. Usually, stride=1 is the default. Do you know another reason? Post it in the comment section. 🙂

Left: a bigger stride can be used | Right: a smaller stride should be used

Padding: This means adding extra pixels around the boundary, i.e., padding the image. We use padding for many reasons. Here are a few: when a series of convolutions is performed, the produced feature map keeps shrinking. At some point, we will not be able to apply any more convolutions, and the network will not be as deep as required. To avoid this situation, we pad the feature maps, which allows a deeper network.
Also, when convolutions are applied, the information at the borders is lost. To avoid this, one can pad the input.

Pooling layers: There are also non-convolution layers, like pooling layers. As the name suggests, a pooling layer picks a value from a pool of values. The most commonly used pooling layer is max pooling: given the pooling filter size, it chooses the max value from the pool of the specified filter size. Average pooling, on the other hand, outputs the average of the pool, and min pooling is the opposite of max pooling. So, the question arises of when to use what. A superb example from the fast.ai course on the difference between max pooling and average pooling: in classifying cats vs. dogs, averaging over the image tells us “how doggy or catty is this image overall.” Since a large part of these images is all dogs and cats, this makes sense. If you were using max pooling, you are simply finding “the most doggy or catty” part of the image, which probably isn’t as useful. However, this may be useful in something like the fisheries competition, where the fish occupy only a small part of the picture.

Activation functions: As Jeremy mentions in his lectures, each activation function has its own characteristics, and we choose the activation depending on our needs. Let’s consider the case of single-label prediction vs. multi-label prediction. In a problem of predicting only a single label, we would use the softmax activation, because softmax highlights the max value while suppressing lower values. This way, it creates a better margin between the max prediction and the other predicted values while keeping the sum of all predictions equal to 1. But in the case of multi-label prediction, multiple predictions may be correct. In that case, we cannot use softmax, because softmax would suppress other predictions while keeping only the highest one.
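That suppression is easy to see numerically. Here is a small sketch with made-up logits for three classes, two of which are genuinely present in the image:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift by the max for numerical stability
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two equally strong activations, as in a multi-label image with two classes.
logits = np.array([4.0, 4.0, -2.0])

# Softmax must split its total mass of 1 between the two strong classes,
# so neither can score near 1; sigmoid scores each class independently.
print(np.round(softmax(logits), 2))
print(np.round(sigmoid(logits), 2))
```

With softmax, the two present classes each get only about half the probability mass; with per-class sigmoids, both score near 1 while the absent class stays near 0.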
In this case, we go for a sigmoid activation for each prediction, which squashes the values between 0 and 1, each therefore denoting the probability of that prediction.

Building an architecture: While building the architecture, we mainly need a good idea of the number and dimensions of the feature maps produced. Two simple rules to keep in mind are these: 1. The number of filters used on an input is equal to the number of feature maps produced. 2. The dimension of the produced feature map is (N + 2P − F)/S + 1, where N is the input size, P the padding, F the filter size and S the stride. For example, if the input image is 28×28, without any padding, and we use a 3×3 filter with a stride of 1, we get a feature map of size (28 + 2×0 − 3)/1 + 1 = 26. Let’s take the same example, but this time with padding. If we calculate, we get a feature map equal in size to the input. Hence, that proves the first point of why we use padding.

Till now, we have looked at individual components of a deep CNN. Let’s look at a whole architecture and, applying the knowledge we have gained so far, understand what’s going on in it. Starting from the left-hand side, we have a convolution layer C1 that uses 5×5 convolutions on an input of size 55×55, resulting in feature maps of size 27×27. We can see that the depth of the produced feature maps is 256; that means 256 filters were used in C1. The same goes for C2, which produces 384 feature maps of dimension 13×13. But from layer C3 to C4, the size of the feature maps produced is equal to the input, which means padding was used there. Fast-forwarding to the last layers, FC means fully connected; here, the network uses 3 FC layers. As we know, FC layers are very good function approximators, but they use individual weights, making them harder to train, whereas CNNs share their weights and are easier to train. (A small set of weights, e.g. the 9 weights of a 3×3 filter, is used over the whole image, hence sharing the 9 weights; but in an FC network, each pixel of the input gets its own weight, which makes training harder.) So, a combination of convolutional and FC layers is used here to exploit the strengths of both types of network. The number of nodes in the last layer of the network is equal to the number of labels, and the network is trained so that the node corresponding to the true label has a higher value than the other nodes.

Post-Lunch Session: In the post-lunch session, we understood and executed this Kaggle notebook, which was forked and modified a little from William Horton’s notebook. One point to highlight here is the use of the sigmoid activation function: as this is a multi-label classification problem, sigmoid is preferred over softmax.

Participants’ response: Participants from various backgrounds attended this session, and their feedback was impressive. We mentors are encouraged by this response and are glad that the participants found the session useful. This is the consolidated feedback of the 22 participants who responded. It is just wonderful to teach and share your experiences with others. Thank you, AI Saturdays, for this amazing platform. Looking forward to the future sessions. Alvida!

AI Saturdays Bangalore Chapter: Everything about Convolutions (Week 6) was originally published in AI Saturdays on Medium, where people are continuing the conversation by highlighting and responding to this story.
- Leaders Appearing in Week 3, by Rich Everts on September 24, 2018 at 7:51 am
The Red and Black Belts covering the first week of the Fast.ai course

Week 3 of AI Saturdays took place this past weekend in Lancaster, PA, with, as always, new and interesting things taking place. The White and Yellow belts improved their Python skills, covering matplotlib, dictionaries, boolean logic, loops, and the basics of working in Jupyter notebooks. The Red and Black belts covered their homework of coding a perceptron from scratch using NumPy, with some great results from the team. They then moved on to their first week of coding the Fast.ai course, covering the setup of their environments and the initial steps of CNNs with the cats-and-dogs example. Before you know it, they’ll be able to create a not-hot-dog detector! There were no major surprises from sororities like last week, and the teams focused and got through their work pretty well. One of the great things about week 3 is that leaders are starting to appear: people who are clearly going through the homework, putting in the effort, and seeing some early results. We hope to begin introducing you to some of them next week! It may only be the first few innings, but we can already see the effects of the course on the participants. Next week, Fast.ai moves forward for the Red and Black belts, and the White and Yellow belts get into the Google Crash Course on Machine Learning. Now we’re getting to the good stuff! See you next Saturday!

Leaders Appearing in Week 3 was originally published in AI Saturdays on Medium, where people are continuing the conversation by highlighting and responding to this story.
- AI Saturdays Bangalore Chapter — Week 2 Reflections, by Naren D on September 24, 2018 at 7:37 am
After kicking off to an amazing start with the 2018 winter cycle of AI Saturdays, it couldn’t have gotten any better for our next session. With over 60 people participating, from students to seasoned tech professionals trying to understand the subject of AI, and over 100 people joining over the live stream, it was a Saturday filled with loads of enthusiasm for this path-breaking technology. In this blog, I will cover the topics that were discussed in detail in the session and share some resources to help you progress with these concepts.

In the previous session, we had discussed the workings of a neural network, the basic variant of the gradient descent algorithm for optimization (later extending it to the backpropagation algorithm) and the various activation functions currently used in the field, which mostly constitutes the first course of the deeplearning.ai specialization. This week, we delved deeper into tuning deep learning models using regularization techniques, discussed the variants of gradient descent and introduced a few sophisticated algorithms like RMSProp and ADAM with thorough explanations. We then went on to discuss how techniques like batch normalization help us reach the minimum much faster. We also covered a way to check the fidelity of backpropagation in our network, and how to partially overcome the vanishing/exploding gradient problem using careful weight initialization. In-depth guides to all these topics are given below.

Topics discussed:
1. L1 and L2 regularization
2. Dropout regularization and early stopping
3. Gradient checking
4. Variants of gradient descent and other optimization algorithms
5. Batch normalization
6. Softmax classifier

All the above components, together with a few other elements discussed in the previous two sessions, make up a whole deep learning model.
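As a taste of the gradient-checking topic above: the idea is to compare the analytic gradient from backpropagation with a centered numerical estimate. A minimal sketch on a toy function (not a full network, and the 1e-7 threshold is the usual rule of thumb from the course):

```python
import numpy as np

def grad_check(f, grad_f, w, eps=1e-7):
    """Compare an analytic gradient with a centered finite-difference estimate."""
    num = np.zeros_like(w)
    for i in range(w.size):
        step = np.zeros_like(w)
        step[i] = eps
        num[i] = (f(w + step) - f(w - step)) / (2 * eps)   # centered difference
    ana = grad_f(w)
    # Relative difference; values around 1e-7 or below suggest a correct gradient.
    return np.linalg.norm(ana - num) / (np.linalg.norm(ana) + np.linalg.norm(num))

# Toy check on f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0, 3.0])
diff = grad_check(lambda v: np.sum(v**2), lambda v: 2 * v, w)
print(diff < 1e-7)   # True: the analytic gradient matches the numerical one
```

In practice the same check is run on a network's cost function with all parameters flattened into one vector, and a large relative difference points to a bug in the backpropagation code.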
The participants have now gained valuable experience and insight into the various components of a neural network model and are capable of building a well-generalized deep learning model. Having built an in-depth theoretical understanding over the past two sessions, we will move on to solving real-world problems with hands-on coding in the upcoming meetups. The enthusiasm shown by the Bangalore members has been fantastic, and the other ambassadors of the Bangalore chapter and I are thrilled to take it forward to new heights from here.

And finally, we would like to thank smartbeings.ai for being a great host. :)

In our upcoming session, which will be held at the Nvidia office, we will cover the third course from deeplearning.ai and introduce you to the amazing PyTorch framework. After that, we will dive deep into building some complex deep learning models to solve problems and see their amazing capabilities in the upcoming sessions. To attend the next session, fill out the form here. Assignments for the sessions conducted to date can be found here. Sign up here to attend the next meetups. All the discussed materials related to the meetup can be found in the Github repo. Follow AISaturdays Bangalore on Twitter.

AI Saturdays Bangalore Chapter — Week 2 Reflections was originally published in AI Saturdays on Medium, where people are continuing the conversation by highlighting and responding to this story.