
Machine Learning with Oracle

November 7, 2019


Hello and welcome to “Machine Learning with Oracle”. In this session we’re going to build our very first machine learning model, ready to use in an intelligent application. We’re going to go through all the basic steps that you need to take, so that afterwards you will be able to build your own machine learning applications. You can do that with Oracle either in the cloud or on-premises, so the choice really is yours. My name is Jeroen Kloosterman; I’m a business development manager for business analytics and big data. Behind the scenes there’s also Javier de La Torre, a solution consultant for all of Oracle’s tech products. Together we are going to be moderating and answering all of the questions coming in, so please post them in the questions tab. We also have some time at the end to take some questions and answer them in a bit more detail.

We’re going to start the session with an overview of where machine learning is used today. To follow this session you don’t need any prior knowledge of machine learning, because I’m going to go through all the theory you need to apply this type of technology. Then we’re going to spend, of course, most of our time on a demonstration, so you can see in practice which steps you need to take to make machine learning work for you. I’ll close off with some practical tips on how to get started yourself, for example where to get a development environment.

But let’s start with an overview of machine learning at Oracle. First of all, you might not realize it, but machine learning is already built into a variety of standard business applications: take the customer experience (CX), HCM or ERP applications. If you’re using HCM Cloud, for example, machine learning is built in to intelligently match candidates with open positions by going through the text in their CVs. And if we take it a step “down”, if I may say it like that, to IT infrastructure management: machine learning is already used there to identify the causes of system issues by analyzing the text in the logs of all the applications. The idea, of course, is that machine learning is simply integrated into those applications. You don’t have to enable anything, you don’t have to set anything up; it’s just there where it makes sense.

Then we have a very different use of machine learning, and that is within the business intelligence platforms. Here we’re talking about things such as dashboards or visualisations. You can use machine learning, for example, not just to plot historical revenue, but also to visually plot predicted future revenue. And if you think about self-service data visualization, a business user can have machine learning tools, such as clustering and trending, available to him.

There’s a third category, which I will call “platform”. Here we create our own machine learning processes with the objective of integrating them into our own custom applications. So really we’re talking about application development, and this is where we’re going to focus our attention today. But even within this we have to make a further distinction, and we’re going to do that by looking at the data source that these machine learning applications are based on. On the right hand side we see machine learning that depends on media sources: think about images, video and audio. Take your mobile phone with a personal assistant and speech recognition.
The self-driving car that interprets the video feed coming from the cameras on the car. Or even in the medical world, where machine learning is used to help diagnose conditions from images coming from, for example, MRI scans. All of these are machine learning cases that use media as input, as their data source.

On the other hand, on the left hand side, we see a very different type of case: the corporate data cases, the data that’s typically stored in databases. For example, if you’re in sales or in marketing: identifying the best potential targets for a new campaign is something machine learning is a very good fit for. Or if you have some kind of service that people can subscribe to, like in telecom, machine learning can predict which customers are likely to leave the service, so you can take some kind of preventive action. But you can also use machine learning to identify possibly fraudulent transactions, which can be very useful if you’re a bank or if you run some kind of exchange platform.

These very different types of data sources are also typically handled on two different technological platforms. We have the big data platforms, which are very suitable for media and the like, and then there’s the Oracle database, which is really suitable for corporate data and machine learning on corporate data. The important thing to understand is that with Oracle technology you can do both. But today I want to focus on the corporate data, on the database cases, because very often you’ll see that that is where the quickest wins are. You already see some product names here: we will focus in particular on a thing called Advanced Analytics, which is an option to the Oracle database. On the other side of the spectrum, not covered in this session, we have things such as the Oracle Big Data Connectors and also the soon-to-be-released AI Platform Cloud Service.

Before we start building our own machine learning model, let’s first have a look at some theory that you have to understand before you can apply this type of technology. So what is machine learning? Let’s first have a look at traditional programming. If you want the computer to reach a certain objective, you, as a human being, write a set of rules, and the computer then reaches the objective by simply executing those steps. But with machine learning you’re no longer going to explicitly program the computer. Instead, you’re going to let the machine learning algorithm find out the best steps to reach the objective that you have set.

If we take an example: imagine we want to sell a certain type of product, so we want to select the top 20 percent of customers most likely to buy that product. With traditional programming, how would you do that? Well, of course you would have your input list with your customers, and then you would create some set of rules that you think will correctly identify the top 20 percent. Those rules, in the form of IF-THEN-ELSE, would involve things such as the age or gender of the customer, or maybe whether the customer bought a product in the last month. However, how can you know that the rules you write are really the right ones? Of course you, the human component, would look at the historical data, right? But identifying the right set of rules is going to be quite difficult. And even if you do get the rules right, how can you maintain them? Because customer behavior will change over time.
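To make that concrete, hand-maintained targeting rules could look something like the following. This is purely illustrative; the table and column names are made up for the example:

```sql
-- Hand-written targeting rules (illustrative only; CUSTOMERS and its
-- columns are assumed names, not taken from the actual demo schema)
SELECT customer_id,
       CASE
         WHEN age BETWEEN 30 AND 45
              AND bought_last_month = 'Yes' THEN 'TARGET'
         WHEN gender = 'F' AND age > 50     THEN 'TARGET'
         ELSE 'SKIP'
       END AS campaign_flag
FROM   customers;
```

Every threshold in there is a guess that somebody has to revisit whenever customer behavior shifts, and that maintenance burden is exactly what the training step described next removes.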
With machine learning, on the other hand, we’re not going to explicitly create that set of rules anymore; we let the computer find out how to go from input to output. We do that by training a model, and a model is basically an understanding of the relationship between input and output. During the training phase we create that model, and in this case we will input historical sales transactions to let the model learn. Then, in a later phase, the prediction phase, we use the model to predict the future behavior of customers.

There are different algorithms to solve different types of problems, so let’s have a look at the most important categories: supervised learning and unsupervised learning. We call machine learning supervised when we have example data to teach the model. It’s really very simple: you show the algorithm examples of input and the correct output corresponding with that input, and the algorithm infers the relationship between input and output. Going back to the example: we’re trying to select the customers most likely to buy a product. Imagine we have historical data in which you can see which of our customers were already offered this product in the past, and also their response to that offer: whether they bought, yes or no. With that, of course, we have excellent examples to train a model with, so you don’t have to explicitly program anymore.

So what does that mean for you as a developer or as a business user? Well, instead of focusing your time on programming, you’re going to spend a lot more time on selecting the right data, on selecting the right features that you believe have some kind of predictive value. Because those features, that data, are what the algorithm learns from.

Coming back to supervised learning: there are two different types. One is called classification and the other is called regression. Classification is all about predicting categories, distinct values. That’s also what we’re doing here, because we are predicting “buy” or “no buy”: two distinct values. On the other hand, if we’re trying to predict a continuous value, we call it regression; for example, predicting customer lifetime value (LTV) in euros. Under the hood we will of course use different algorithms for classification and regression, but just remember that in principle all the steps are the same; both classification and regression are forms of supervised learning.

Unsupervised learning is different, because there we don’t have a label. We don’t have desired example output data, so we cannot train the model in the same way. Instead, we just give the algorithm a set of input data and we ask it: find out the hidden patterns in this data. We really don’t give many more instructions than that. The most common case of this is clustering. In this example we see a list of customers, and we ask the algorithm to create a certain number of logical clusters in there. Now, this may be the end result; having these clusters, I may be happy with that. But very often the result of the clustering will be the input for another machine learning task, for example a classification task. So very often in machine learning you build processes: you use several algorithms in steps to reach a certain end objective. A small sketch of such a clustering step follows below.

Now we’re actually going to create such a machine learning model, and we’re going to do that directly in the Oracle database.
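As a short aside before the demo: the clustering scenario just described can also be run inside the database, through the in-database mining package that the graphical tooling drives under the hood. A minimal sketch, with all table, model and setting names as illustrative assumptions:

```sql
-- Pick the k-means algorithm and ask for four clusters
CREATE TABLE km_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);
INSERT INTO km_settings VALUES ('ALGO_NAME', 'ALGO_KMEANS');
INSERT INTO km_settings VALUES ('CLUS_NUM_CLUSTERS', '4');

-- Train an unsupervised model: note there is no target column
BEGIN
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'CUST_SEGMENTS',   -- assumed name
    mining_function     => DBMS_DATA_MINING.CLUSTERING,
    data_table_name     => 'CUSTOMERS',       -- assumed table
    case_id_column_name => 'CUSTOMER_ID',
    settings_table_name => 'KM_SETTINGS');
END;
/

-- Assign every customer to one of the discovered clusters
SELECT customer_id,
       CLUSTER_ID(CUST_SEGMENTS USING *) AS segment
FROM   customers;
```

The cluster assignment coming out of that last query is exactly the kind of result that could feed a follow-up classification step.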
As mentioned, we are going to work with an option to the Oracle database called Advanced Analytics, and that lives right there inside the Oracle database, very close to your data. The fact that it’s so close to your data makes it extremely easy for us to develop the model. It also makes it very easy to put it into production, because the applications also run on top of that database, so an application can simply pick up the results of a prediction process from that same database. We don’t have to move any data around. What we are going to build is the example I already mentioned earlier: we’re going to apply machine learning to a sales scenario and predict which customers are most likely to buy a certain product.

So let’s go to the demo. Our goal is to make an existing application intelligent. I’ve already prepared this demo by first creating a very simple customer service application. Let me quickly explain the background, how this will be used in practice. Imagine a customer calls us; imagine his name is Bryan. The service employee looks up the name and can simply go into the details. The idea is that they can then have a conversation with this customer about whatever service issue they have, and here we see all the interactions we’ve had with that customer in the past. My goal is to upsell something to this customer. I want to use machine learning to alert my employee whenever we have a customer on the screen who is a potential candidate for this new product. So I’d probably add some kind of field right here, saying Bryan is a good candidate for my insurance product, the product I’m trying to sell. By the way, I’ve built this application with Oracle APEX. I’m not sure whether that’s familiar to you; the application technology doesn’t really matter here, it could just as well have been Java, for example. The important thing is that there’s an Oracle database below it.

Before we move on, let me just zoom out for a few seconds and tell you about the two developer options there are for Advanced Analytics in the database. The first one is SQL Developer. This will be familiar to the people doing development with the Oracle database; it’s basically a local tool that allows you to work with the database. There’s a plug-in for SQL Developer that allows you to do the things you see here: develop machine learning flows, for example to create models and to do scoring and prediction, et cetera. The second option is RStudio. “R”, for the people who don’t know it, is a language and environment for statistical computing, and we see that a lot of people coming from university nowadays have R in their skill set, so it’s good to know that that’s also an option here. But today we’re going to look at the first way of developing, with SQL Developer, because you will see that it really makes your life very easy: it’s intuitive and it’s visual.

So let’s jump right into it. Here we see SQL Developer open, and what we see here is the final process that we’re going to develop now, step by step, starting at the beginning. These processes also get stored in the database, and because of that you first have to create a repository schema; there are a few other initial steps as well, which are of course documented. I will share that information at the end of the session, so you know which steps to take. Let’s just go to our database connection.
This is the database that holds the data for our application. In fact, the data of those customers is stored in this table: INSUR_CUST_LTV_SAMPLE. Here we see the most important customer information: first name and last name, and some demographic information. The interesting field for me is this one, “BUY_INSURANCE”: the field that tells me whether they bought insurance in the past, and insurance is the product that I want to be selling. So this is my “target” variable, and all these other fields are the “features”, which I suspect have some kind of influence on whether somebody buys insurance. This situation calls for a supervised machine learning algorithm, because we have both the driving features and the target variable, which the model can learn from and be checked against. Moreover, this is a classification problem, because the column “BUY_INSURANCE” has two distinct values: no or yes.

Now let’s go into Data Miner, the plug-in in SQL Developer. We’re going to create this workflow from scratch, so let me just open this empty workflow. Of course, everything starts with the data. I’m going to look up that same data source again, drag it over to the canvas, and choose the same table we were looking at earlier. Here it is. In this case I’m just going to select all of the columns and not do anything special.

Before we start building the model, I’d like to verify the data that’s in the table. I’d like to get a feeling for how the values are distributed, for example, because there might even be a few columns that don’t make any sense to include in the machine learning process. That’s done with the Explore Data node, which you find here. We just need to create a little flow between these two nodes, so that Explore Data knows where its input data comes from. Let me finish the configuration and let’s see the result. Let’s start with the target variable, “BUY_INSURANCE”. We see that a bit more than 25 percent of the population has bought insurance already, and of course we’re looking for the secret behind this group, to then try to find more people that will buy. But let’s also have a look at “age”, for example. In yellow you see the people that bought, and in red the people that didn’t buy. In most of these age groups the ratio seems to be about a quarter, more or less, but looking at this group, for example, between 59 and 67 years old, the ratio seems to be much smaller. So age has some kind of influence on “BUY_INSURANCE”. These kinds of distributions and statistics help me get a feeling for the data, and based on them I could, for example, already eliminate a few attributes that I don’t even want to include in the machine learning. A SQL sketch of the same kind of check follows below.

Now we move on to actually creating the model. Let’s move this Explore Data node to the side and go to the different types of algorithms, which we see here. The one we’re looking for, of course, is classification, but you also see clustering and regression, for example, as I mentioned earlier. Let’s take the classification node and configure it so that the data comes from this table. The configuration opens automatically. What’s really interesting here is that it shows you four different algorithms for classification by default: Generalized Linear Models, Support Vector Machines, Decision Trees and Naive Bayes.
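By the way, the kind of distribution check we just did with Explore Data can also be approximated directly in SQL, in case you prefer to sanity-check the data from a worksheet. A minimal sketch; the age-band boundaries here are assumptions, not the ones Explore Data picked:

```sql
-- Overall distribution of the target variable
SELECT buy_insurance,
       COUNT(*)                                           AS cnt,
       ROUND(RATIO_TO_REPORT(COUNT(*)) OVER () * 100, 1)  AS pct
FROM   insur_cust_ltv_sample
GROUP  BY buy_insurance;

-- Target distribution per age band
-- (8 equal-width bands between 18 and 90; boundaries are assumptions)
SELECT WIDTH_BUCKET(age, 18, 90, 8) AS age_band,
       buy_insurance,
       COUNT(*)                     AS cnt
FROM   insur_cust_ltv_sample
GROUP  BY WIDTH_BUCKET(age, 18, 90, 8), buy_insurance
ORDER  BY age_band, buy_insurance;
```

Nothing in the workflow depends on this; it is just another way to get the same feeling for the data.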
Now, the good thing about having these four algorithms side by side is that you don’t have to try the different classification models one by one: they will all be evaluated in parallel, and then you can simply compare them and choose the best one. We haven’t gone into the details of these different classification algorithms, and I think for now that’s also not so important. Just consider them black boxes; they have different ways of finding the relationships between the input and the target variable.

Now we have to configure the classifier. Let’s start with the input. What you see here is of course a direct result of the line that we drew between the data source and the classifier: the features are all here. We set the target variable, which is “BUY_INSURANCE”. The last thing we need to do is set something that uniquely identifies each of the records; in our case that is the customer ID, since we have one record per customer. Now we’re ready to run the model, so let’s say OK here. And of course this is where the magic happens. At the bottom right you can see the progress of the model build. This is where each of the learning algorithms builds, in its own unique way, its understanding of how to go from the features to the target variable. This step can take some time, depending on the size of the table: the number of records and the number of attributes.

While this is building, let me explain how we are going to validate the model, to see how correct it actually is. Here we see that we are going to reserve 40 percent of the records in our data set to test the model. This means that for each of those records we take the features, feed them into the model and let it predict the outcome of “BUY_INSURANCE”. And since we also have the actual value of “BUY_INSURANCE”, we can come up with a percentage of correctly predicted records. This also means that we’re going to use only 60 percent of the dataset to actually train the model. You may ask yourself: why don’t we use 100 percent of the data set? After all, more data should result in a more accurate model, right? Although that in itself is true, there’s something even more important that you should understand: a model can only really be tested on unseen data. Imagine you tested with data that the model was also trained on; there’s a chance that it has simply memorized the right answers, and that of course we don’t want. We want the model to learn generalized rules for how to go from input to output, not to simply memorize the individual cases.

In the meantime, our four models have finished training and they’ve also been tested already, so let’s have a look at the test results and compare the performance of the four models. The decision tree seems to be doing quite well here, looking at the predictive confidence and also the average accuracy. We can take it one step deeper by looking at the performance matrix, which shows the accuracy for each of the individual possible outcomes; we can have “BUY_INSURANCE” “yes” or “no”. This tells me, for example, that the decision tree gets almost 80 percent of the “yes” cases right. Now I’ll just zoom in on the lift diagram. Apart from the four models that we created, we also see a so-called random model: a theoretical model that just takes a random guess.
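As a side note for the script-minded: the train-and-test setup we just walked through can also be expressed directly against the in-database mining API. This is a minimal sketch under assumptions; the 60/40 hash split, the settings table and the model name are all illustrative, and the Data Miner workflow manages these details for you:

```sql
-- Illustrative 60/40 split on the case id (Data Miner does this for you)
CREATE TABLE insur_train AS
  SELECT * FROM insur_cust_ltv_sample
  WHERE  ORA_HASH(customer_id, 99) < 60;   -- ~60% of rows for training

CREATE TABLE insur_test AS
  SELECT * FROM insur_cust_ltv_sample
  WHERE  ORA_HASH(customer_id, 99) >= 60;  -- ~40% held back for testing

-- Choose one of the four algorithms, here the decision tree
CREATE TABLE dt_settings (
  setting_name  VARCHAR2(30),
  setting_value VARCHAR2(4000)
);
INSERT INTO dt_settings VALUES ('ALGO_NAME', 'ALGO_DECISION_TREE');

-- Train the classifier on the training portion only
BEGIN
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'INSUR_BUY_DT',   -- assumed name
    mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
    data_table_name     => 'INSUR_TRAIN',
    case_id_column_name => 'CUSTOMER_ID',
    target_column_name  => 'BUY_INSURANCE',
    settings_table_name => 'DT_SETTINGS');
END;
/

-- A rough accuracy check on the held-back 40 percent
SELECT ROUND(AVG(CASE WHEN PREDICTION(INSUR_BUY_DT USING *) = buy_insurance
                      THEN 1 ELSE 0 END) * 100, 1) AS pct_correct
FROM   insur_test;
```

Data Miner’s test results viewer does considerably more than this single accuracy number, but the mechanics underneath are the same.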
Coming back to the lift diagram: of course it’s all about the difference between the performance of the models we create and that random model. We also see a theoretical ideal model, basically a model that gets all of the predictions right. My conclusion here is that the decision tree is the best of the four models we’ve just created, regardless of the population size that we choose.

Many learning algorithms result in models that are effectively black boxes: you cannot really look into them, you can only see what the outcome is. But the nice thing about the decision tree is that you can see how the model makes its decisions, and here you see that graphically represented. I’ll just show you one example. Here we see a set of rules, looking at the features of the input data, that result in a certain decision, in this case “yes”, buy insurance; that’s the prediction of the model. So the nice thing about this is that you can always say why a decision was made by the model. We’re going to select the decision tree as our model of choice, so from now on we will only build the decision tree, and I’m going to turn off all the other ones.

Now we’re ready to apply the model to make predictions. We can make predictions in two different ways. Either on a case-by-case basis, in this case one customer at a time, ad hoc; you can use SQL syntax for that. Or we can predict in batch, which means we make the predictions for all of the customers in one go, store them in a table, and have them ready for any application that might need them. We are going to score in batch, and we start with the Apply node, which basically does the prediction for us. “Apply” needs the input data, which in this case comes from a database view, and this view holds all those customers again, but only the ones that didn’t buy the insurance yet. So the input for the prediction process is these two things: the model and the input data set that we need to score. The output of the scoring process will be another table, so let’s give that a name; let’s call it “INSUR_PRED”.

Now let’s configure the prediction process. The output table that we’re creating will have several columns. Of course the prediction itself, the value: is it “BUY_INSURANCE” “yes” or “no”. We will also get a column saying something about the likelihood that the prediction is indeed correct. I will ignore the prediction cost for now. And of course we want this prediction for each customer, so we have to indicate that the customer ID is the unique identifier for each customer, and we want that customer ID to be one of the output columns as well; let’s add that here. And that’s it: now we’re ready to make the predictions, so let’s run it. Again, this might take a little bit of time depending on the size of the dataset.

The prediction process has already completed, so let’s have a look at the output data, starting on the right hand side. Here we have the customer ID, of course; this is the predicted value coming from the decision tree; and this is the probability, the likelihood that the prediction is indeed correct. As this data is already stored in the database, it becomes really easy to integrate it into any application that you wish.

Okay. Now let’s tie it all together and go back to our customer service application. Remember, we want to show a recommendation to our customer service employee when the customer he’s speaking to is a likely candidate to buy our insurance product.
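Before we open the page: written out as SQL, the batch scoring and the recommendation column we’re about to add boil down to something along these lines. A minimal sketch, assuming a view CUST_NO_INSURANCE_V with the customers who haven’t bought yet and made-up column and object names; the 65 percent threshold is the one we’ll use in the demo:

```sql
-- Batch scoring: one row per customer, with the predicted class
-- and the probability that the prediction is correct
CREATE TABLE insur_pred AS
SELECT customer_id,
       PREDICTION(INSUR_BUY_DT USING *)             AS buy_pred,
       PREDICTION_PROBABILITY(INSUR_BUY_DT USING *) AS buy_prob
FROM   cust_no_insurance_v;   -- assumed view: customers without insurance

-- The view behind the service screen, extended with one extra column
CREATE OR REPLACE VIEW cust_service_v AS
SELECT c.customer_id,
       c.first_name,
       c.last_name,
       CASE
         WHEN p.buy_pred = 'Yes' AND p.buy_prob >= 0.65
         THEN 'Good candidate for the insurance product!'
       END AS recommendation   -- NULL when there is nothing to recommend
FROM   insur_cust_ltv_sample c
LEFT JOIN insur_pred p
       ON p.customer_id = c.customer_id;
```

Because the scored rows live in an ordinary table, the application side really is just this one extra join and CASE expression.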
So let’s have a look at that page again. Here we see that this view is being used to show the customer information. Again, remember we’re doing this with Oracle APEX, but this could have been any kind of application technology. We’re going to edit the definition of this view to add a column that contains our recommendation. So let’s go to the database and find the view; there it is, and let’s edit it. Here we see the current fields that are being shown on that screen. Now we’re going to join in the output table from our prediction process. I’ve already created that SQL, so let me just paste it in here. We still have the original columns, of course, and we’ve added this bit of extra SQL that really does only one thing: it adds one column, “RECOMMENDATION”. The recommendation is only filled in when “yes” is predicted, so when the model expects the person to buy insurance, and when the probability is at least 65 percent. In that case we create a nice little text for our customer service employee.

Let’s go and see how this works in the application, so I’m going back to APEX. I’m just going to do one little trick: typing a space here, just to make sure that the page is refreshed. I’m saving it, and let’s run. So again, let’s look for this guy called Bryan and open him. We see the new field, but in this case apparently it’s not recommended to talk about the insurance product. Let’s look up somebody else, for example Marc. And there we go: Marc Harris. Here the customer service employee does get a recommendation, because this customer is quite likely to be interested in the product.

Now, in this case we’ve waited for our customer to call us, and we only start the conversation where that makes sense. But now that we have this recommendation field, we could also drive it the other way: we can create an outbound sales campaign where we actively approach the customers that are potentially interested. We can just use the recommendation field for that: in this case we simply sort on it, and like this we very easily create our target list. This one field we’ve added to the screen, this one recommendation, can make quite an impact. In this example it lets the customer service employee work a lot more efficiently and a lot smarter, and the same recommendation means that my salespeople can be much more focused and target those customers where they have the highest chance of success.

From a developer point of view, you’ve seen that you can use SQL Developer to create these processes, and that you can do that straight in the database; you didn’t have to move any data around, for example. And then making the application intelligent, really putting our predictions into production, was also quite straightforward, because all we had to do was retrieve the data that was already predicted, and that was just a matter of extending some SQL.

So what steps can you take if you want to start with this type of development? First of all, for the on-premises case, you will require the Enterprise Edition of the database; the Advanced Analytics option is then already installed with it, although you do need to license it, of course. So if you fulfill that requirement, you can already start experimenting in your development database. And if you want a completely separate playground, with nothing shared with your existing environments, you can also download a VirtualBox virtual image and run everything completely separately in there.
And that’s this one, shown here, which has the database and the Advanced Analytics option included. For the cloud there are also trials available: just go to cloud.oracle.com and look for database as a service. What’s important to know is that you need either the High Performance or the Extreme Performance version, because only those include the Advanced Analytics option.

Before we go to the questions: if you’re interested in updates, new materials or discussions on machine learning with Oracle, please join this LinkedIn group. You can search for the name “Machine Learning / Artificial Intelligence with Oracle”, or you can use the direct URL that you see here.

And of course we’d really like to hear from you. What did you think of this session? Is this what you expected? We’re going to do more of these webinars in the very near future, and we’d really like your input on what kind of machine learning subjects you’d like to hear about. Do you want them to be about the business intelligence aspects? Maybe you want to see “self-service” machine learning? Or do you want to hear about machine learning in a certain industry, or anything else? Please let us know, and we’ll take it into consideration very seriously. So now let’s have a look at the questions that have come in.

15 Comments

  • FluidezEnChino.com November 22, 2017 at 5:26 am

    Excellent, Thanks Jeroen!!

  • animatem January 12, 2018 at 1:53 am

    Perfect intro for me. Thank you.

  • Sonya Mladenova March 8, 2018 at 10:20 pm

    Excellent

  • Kopano Ramaphoi March 26, 2018 at 6:24 am

    Mr Kloosterman, that was an incredible introduction for a novice like me.

  • Anil Chanda April 3, 2018 at 10:23 am

    This is one of the best webinars I have come across. Simple, practical and without jargon.

  • Mehul Patel April 10, 2018 at 1:15 pm

    Excellent explanation. Much appreciated.

  • dmitry buzolin May 7, 2018 at 6:56 pm

    Hi Jeroen. I used Oracle DB Developer VM as advised in this presentation from here http://www.oracle.com/technetwork/database/enterprise-edition/databaseappdev-vm-161299.html and can't see sample data in the database. Any ideas how to obtain these samples?

  • Los Hermanos Med June 5, 2018 at 8:39 pm

    Great Demo

  • 臧運祥 June 22, 2018 at 11:40 am

    Great vid!

  • Manish Srivastava July 11, 2018 at 2:05 pm

    Excellent OML session 🙂

  • M S October 1, 2018 at 7:25 am

    I would like to see ML on Oracle financial data of AR, AP & GL. Any possibility to get a video on it, or to share some use cases?

  • rohit kumar January 23, 2019 at 5:50 am

    Great explanation, this video clears up what data science is. I also have something to share; for more details:
    https://goo.gl/SgC1dp

  • Omprakash Maurya June 26, 2019 at 9:15 pm

    Awesome video. I am interested in ML using Python. Any good video that you can point me to?

  • jeong ki Hong July 17, 2019 at 12:34 pm

    VERY NICE… Big Thanks!!

  • Mani Kanta November 3, 2019 at 5:59 pm

    Good, but I think you have to provide some more examples.
