

How Is AI/ML Enabling Better Supply Chain Planning?

Every year, supply chain disruptions cost businesses an average of $200 million. Recent events, from the COVID-19 pandemic and the Suez Canal blockage in 2021 to ongoing geopolitical conflict, have exponentially increased the risk of such disruptions. Market volatility, supplier inconsistency driven by political and geographical barriers around the world, a workforce hit by COVID and war, and the shift to new ways of working have all hindered the regular flow of supply chains. This situation demands that supply chains become more agile and better connected as quickly as possible. Rising demand squeezes the scope for error and calls for more accurate demand forecasting to lower the loss rate.

In recent years, most leading companies have started implementing AI/ML to enable faster decision making, more accurate demand forecasting, better inventory management, speedier operations, dynamic logistics, and tighter delivery control. Gartner projects that the use of AI and ML in supply chains will double over the next five years. Another study expects global spending on IIoT platforms to rise from $1.67 billion in 2018 to $12.44 billion in 2024, a compound annual growth rate (CAGR) of roughly 40%. Considering the vastness and increasing complexity of supply chain networks, an automated system to manage the entire network better is the need of the hour. This article briefly discusses how the supply chain landscape has changed since the pandemic, the implementation of AI/ML in the supply chain along with its benefits and challenges, and, finally, use cases of AI in the supply chain.

Aftermath of the Supply Chain Landscape Post the COVID Pandemic

The COVID-19 pandemic has exacerbated the pre-existing challenges of the logistics and supply chain industry and added a few more to that list. It changed fundamental consumer behavior and demanded the adoption of agile ways of working. In a 2020 survey by EY USA, 97% of industrial companies revealed that the pandemic had a severe negative effect. A few sectors, such as life sciences, did report a positive impact on their businesses during the pandemic: for instance, 11% said that their customer demand increased by 71%, and the rate of launching new products to the market increased by 57%. The challenge was that these companies had to ramp up production of essential products, such as COVID-19 vaccines, to twice previous levels. Overall, the pandemic demanded more resources, tighter inventory management, and, more importantly, accurate predictive analysis of the market.

Next Step for Supply Chains

In the same EY survey, 60% of executives said that the pandemic catalyzed their strategic needs for the supply chain. The supply chain of the future needs to be agile, flexible, efficient, resilient, and digitally networked. The pandemic pushed many sectors onto digital platforms and workforces into remote work, while onsite staff had to adopt COVID-19 norms and operate in the new normal. The survey also showed an increase in investment in automation, AI, and machine learning, with 37% of respondents already deploying these technologies and 36% planning to do so soon. Digital and autonomous technologies will make people's jobs easier and the supply chain more efficient and optimized.

How Can AI/ML Help?

Implementing AI/ML in the supply chain has numerous benefits.
Some of the significant benefits are:

Predictive Analysis: Demand forecasting uses the power of automation to analyze all the data an organization can collect, from demographics to price changes to consumer sentiment, and make sense of it against the sales history. Companies can use machine learning models for predictive demand forecasting: these models analyze historical data to identify patterns, so in the supply chain they can surface issues before any disruption occurs (a minimal forecasting sketch appears at the end of this article). A solid supply chain forecasting system means the company has the resources and intelligence to respond to emerging issues and threats, and the effectiveness of the response grows in direct proportion to how quickly the business can react.

Optimized Inventory Management: An appropriate AI/ML model helps a company manage overstocking and understocking, thereby improving inventory management. It can analyze customer and market demand from survey data and enable continuous improvement in meeting the desired level of customer service at the lowest cost. AI and ML can also analyze large data sets much faster and avoid the human errors typical of manual work.

Avoid Forecast Errors: AI/ML algorithms help organizations deal with large data sets of tremendous variety and variability. IoT devices, intelligent transportation systems, and other technologies enable the supply chain to gather massive amounts of data. Models built on this data give companies better insight and more accurate forecasts, preventing major disruption or loss. According to a McKinsey survey, AI- and ML-based supply chains can reduce forecasting error by 50%.

Improve Supply Chain Responsiveness: To improve customer experience at minimal cost, most B2C companies are implementing AI/ML models. These models enable automatic responses; AI chatbots serve customers promptly and help handle demand-to-supply imbalances. The ability of ML models to analyze historical data helps supply chain managers understand customer demand and plan vehicle routes and goods delivery better, reducing driving time and cost and enhancing productivity.
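As a concrete illustration of the demand forecasting mentioned above, here is a minimal sketch of an ML forecast built on lag features of historical sales. The file and column names (sales_history.csv, date, sku, units_sold) are illustrative assumptions, not a reference to any specific system.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

sales = pd.read_csv("sales_history.csv", parse_dates=["date"])
sales = sales.sort_values(["sku", "date"])

# Simple calendar, lag and rolling-mean features computed per SKU.
grp = sales.groupby("sku")["units_sold"]
sales["month"] = sales["date"].dt.month
sales["lag_1"] = grp.shift(1)
sales["lag_4"] = grp.shift(4)
sales["rolling_mean_4"] = grp.transform(lambda s: s.shift(1).rolling(4).mean())
sales = sales.dropna()

features = ["month", "lag_1", "lag_4", "rolling_mean_4"]
# Hold out the most recent quarter as a validation window.
cutoff = sales["date"].max() - pd.DateOffset(months=3)
train, valid = sales[sales["date"] <= cutoff], sales[sales["date"] > cutoff]

model = GradientBoostingRegressor().fit(train[features], train["units_sold"])
forecast = model.predict(valid[features])
print("MAPE:", mean_absolute_percentage_error(valid["units_sold"], forecast))

In practice a forecasting team would add promotions, pricing and external signals as features and compare several models, but the pattern of training on history and validating on a held-out recent window stays the same.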


Anomaly Detection: Paving the Way for A ‘Smarter’ Business

A premium automobile manufacturer in the UK spent over $3 billion on product recalls due to malfunctioning braking systems, rearview cameras, airbags, and other minor issues. Another European luxury automobile maker's previous three annual reports indicate it spent nearly $7 billion recalling products for problems ranging from door locks to batteries, fuse boxes, fuel tanks, and wheel speed sensors. At a time when automakers are focused on developing self-driving and semi-autonomous vehicles, errors like these can hardly be overlooked. The entire network follows stringent quality control processes; despite this, as the cases above demonstrate, lapses continue to occur. In this context, anomaly detection can play a critical role across the industrial landscape.

Understanding Anomaly Detection

Anomaly detection, described by Forbes as 'one of the most underrated BI tools of 2020', is an area of artificial intelligence that analyzes an organization's data for deviations from normal behavior. Inconsistent data points, known as outliers, may emerge, and detecting them is critical to proactively averting scenarios like those described above. In some applications the anomalies themselves are of interest, and the observations stemming from them can be the most significant in the entire dataset. Anomaly detection has applications in cyber-security, fraud detection, fault detection, system health monitoring, detection of ecosystem disturbances with computer vision, medical diagnosis, law enforcement, manufacturing, and more. Eliminating anomalous data can increase the accuracy of statistics such as the mean and standard deviation, improve data visualization, and enhance machine learning algorithms. A variety of tools and techniques are typically used for anomaly detection.

Anomaly Detection in Industry 4.0

Big data, machine learning, AI, and IoT devices are laying the foundation for a data-driven culture. IoT allows machines to exchange real-time data over the internet. As part of the Internet of Things revolution, digital sensors and networking technologies are being added to the analog devices we use every day, ushering in the era we call Industry 4.0. According to some estimates, by 2025 there will be 64 billion IoT devices connected to the internet, which means businesses will have to deal with a huge deluge of data. According to the Boston Consulting Group, IIoT is one of the nine principal technologies that make up Industry 4.0. Combining these technologies creates a "smart factory" where machines, systems, and humans work in harmony, coordinating and monitoring progress along the assembly line.

An important goal of Industry 4.0 will be predictive maintenance, driven heavily by anomaly detection. As in traditional business applications and IT infrastructure, IoT devices can be monitored for errors. In industrial and manufacturing settings where IoT devices facilitate modernization and automation, anomalies might indicate that a machine needs maintenance, and identifying a potential problem early helps reduce unplanned downtime. A McKinsey insight report found that advanced analytics can predict machine failure before it occurs, reducing downtime by 30% to 50% and increasing equipment life by approximately 40%, improving productivity in all areas.
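In practice, monitoring sensor streams for such anomalies often starts with a simple unsupervised model. Below is a minimal sketch using scikit-learn's IsolationForest; the file name and sensor columns (turbine_sensor_feed.csv, temperature, vibration, output_kw) are illustrative assumptions, not the API of any particular platform.

import pandas as pd
from sklearn.ensemble import IsolationForest

readings = pd.read_csv("turbine_sensor_feed.csv", parse_dates=["timestamp"])
features = readings[["temperature", "vibration", "output_kw"]]

# contamination is the expected share of anomalous readings; tune it from history.
detector = IsolationForest(contamination=0.01, random_state=42)
readings["anomaly"] = detector.fit_predict(features)  # -1 = anomaly, 1 = normal

alerts = readings[readings["anomaly"] == -1]
print(f"{len(alerts)} anomalous readings flagged for maintenance review")
print(alerts[["timestamp", "temperature", "vibration", "output_kw"]].head())

A production setup would stream readings instead of reading a CSV and would route flagged records to an alerting dashboard, but the detection step itself can remain this small.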
An inadequate maintenance program can reduce an equipment's productive capacity by up to 20%, while unplanned or last-minute machine downtime costs industrial manufacturing organizations $50 billion a year. By leveraging IIoT data, manufacturers can gain meaningful insights into their businesses and schedule pre-emptive structural health checks on equipment.

Benefits of Anomaly Detection in IIoT

Cost and Time Savings: By detecting anomalies early on, you can ward off potential losses and liabilities. Noise and outliers often produce false positives and get in the way of early anomaly identification; configurable time frames and historical pattern analysis improve both detection latency and accuracy.

Deeper Insights: Detecting anomalies is only one step of a complex process that includes issue triage, root cause analysis, troubleshooting, and feedback-based system tuning. Anomaly detection models engineered from the ground up to provide advanced insights let you investigate issues such as anomaly timeframes, severity scores, and correlated metrics, and advanced dashboards help visualize these insights.

Proactive Intervention: After anomalies are detected early, you can generate insightful summaries to aid investigation and fine-tune alert thresholds and severity levels according to operational feedback.

How Our IIoT Software Solution Helps with Anomaly Detection and Much More

Valiance provides a proprietary IoT-based platform for remote monitoring and diagnosis of a power plant's assets. Various sensors and scalable cloud software help operational teams view asset health in real time and give business stakeholders insight into plant efficiency and output. For instance, one of our clients, who was struggling with manual intervention, wanted to enable centralized data collection through sensors and reduce the manual work involved. They also wanted to monitor assets like turbines, and sensors such as thermometers and energy meters, through dashboards. Valiance delivered an IIoT-enabled platform that lets staff view real-time data and monitor asset performance. KPIs and threshold levels defined at the backend allow anomalies to be detected, the client notified, and the necessary steps taken in a timely manner. In another instance, by enabling centralized data collection on a near real-time basis, our platform allowed the customer to perform plant-wise and region-wise analysis of power generation targets, monitor actual output, identify gaps, and coordinate timely intervention. Earlier this had not been possible due to delays in data collection, consolidation, and sharing; after the platform was installed, the time lag dropped from two to three weeks to less than a day, resulting in annual manpower savings of nearly four months.

Leverage Anomaly Detection to Improve Your Industrial Assets' Performance

Anomaly detection is poised to become one of the most exciting developments in the business world, adding even more 'intelligence' to the AI revolution. But how can you get started? Schedule a call today to find out.


How Are Information Silos Impacting Supply Chains?

Supply chain planning is a vast field involving many departments and stakeholders using different technologies, which results in data silos. Traditional supply chain models store information in Excel spreadsheets, alongside multiple closed ERP systems for the various parts of supply planning. The stakeholders involved in supply planning have been using these siloed systems with little collaboration and little visibility for others, which leads to a range of problems.

Moreover, in modern supply chains, globalization increases data integration difficulties. New business models with outsourced manufacturing, acquisitions, and partnerships or joint developments with partners have blurred divisional boundaries. Data plays an integral part in such a system, and silos put the overall business at greater risk. For instance, it becomes harder to share demand forecasts and demand changes with suppliers. On the other hand, trying to minimize silos through periodic on-site meetings and hours spent working through changes is not cost-effective; it is inefficient, ineffective, and expensive for any last-minute change. The effects of the COVID-19 pandemic also continue to sputter and strain the global supply chain, with ongoing disruptions such as misplaced shipping materials and data mismanagement; IDC polled 532 companies across the pharmaceutical supply chain to assess the impact of the pandemic's supply disruptions. This article highlights the core reasons behind information silos and their effect on supply chain planning.

When Does a Silo Happen?

Let's say a global company makes an important decision, like entering or expanding into a new market; silos appear when the individual verticals make their decisions in isolation. Silos happen when different verticals don't share relevant data horizontally. As a company grows, so does the number of verticals and divisions within it. Going by the old saying, "too many cooks spoil the broth," the system becomes inconsistent due to poor data sharing and management between teams, and that leads to silos.

Why Do Information Silos Occur?

There are three significant reasons that can result in information silos.

Effect of Information Silos on the Supply Chain

Modern supply chains involve multiple departments and stakeholders, and quick decisions must be made on the basis of vast amounts of data. A lack of communication between departments often hurts all of them. Food wastage in the United States is one consequence of such data silos: according to the NRDC, approximately 40% of food in the US goes to waste, amounting to roughly $218 billion per year.

Wrapping Up

Data silos are a dilemma companies face across industries, but they are especially damaging to the logistics and transportation industries. A data-driven approach is the optimal way to prevent information silos. Once an organization acknowledges its lack of data governance, the next step is to identify the processes, tools, and models needed to create an effective data governance method. This is the most critical step in implementing a data-driven procedure; the rest follows gradually. Even to implement an AI/ML solution that eliminates the silo problem, a data governance maturity model is a must. Removing supply chain bottlenecks and silos ensures that your customers' needs are met and helps you make the right decisions.
Overall, eliminating data silos can increase efficiency and boost your company's bottom line.


Blockchain: 5 Things You Should Know About It

Talk of blockchain technology is everywhere, it seems — but what is it, and what does it do?

1. Don't call it "the" blockchain

The first thing to know about the blockchain is, there isn't one: there are many. Blockchains are distributed, tamper-proof public ledgers of transactions. The most well-known is the record of bitcoin transactions, but in addition to tracking cryptocurrencies, blockchains are being used to record loans, stock transfers, contracts, healthcare data and even votes.

2. Security, transparency: the network's run by us

There's no central authority in a blockchain system: participating computers exchange transactions for inclusion in the ledger they share over a peer-to-peer network. Each node in the chain keeps a copy of the ledger, and can trust others' copies of it because of the way they are signed. Periodically, they wrap up the latest transactions in a new block of data to be added to the chain. Alongside the transaction data, each block contains a computational "hash" of itself and of the previous block in the chain. Hashes, or digests, are short digital representations of larger chunks of data. Modifying or faking a transaction in an earlier block would change its hash, requiring that the hashes embedded in it and all subsequent blocks be recalculated to hide the change. That would be extremely difficult to do before all the honest actors added new, legitimate transactions — which reference the previous hashes — to the end of the chain. (A toy sketch of this chaining appears at the end of this article.)

3. Big business is taking an interest in blockchain technology

Blockchain technology was originally something talked about by anti-establishment figures seeking independence from central control, but it's fast becoming part of the establishment: companies such as IBM and Microsoft are selling it, and major banks and stock exchanges are buying.

4. No third party in between

Because the computers making up a blockchain system contribute to the content of the ledger and guarantee its integrity, there is no need for a middleman or trusted third-party agency to maintain the database. That's one of the things attracting banks and trading exchanges to the technology — but it's also proving a stumbling block for bitcoin as traffic scales. The total computing power devoted to processing bitcoin is said to exceed that of the world's fastest 500 supercomputers combined, but last month the volume of bitcoin transactions was so great that the network was taking up to 30 minutes to confirm that some of them had been included in the ledger. By contrast, it typically takes only a few seconds to confirm credit card transactions, which do rely on a central authority between payer and payee.

5. Programmable money

One of the more interesting uses for blockchains is for storing a record not of what happened in the past, but of what should happen in the future. Organizations including the Ethereum Foundation are using blockchain technology to store and process "smart contracts," executed by the network of computers participating in the blockchain on a pay-as-you-go basis. They can respond to transactions by gathering, storing or transmitting information, or transferring whatever digital currency the blockchain deals in. The immutability of the contracts is guaranteed by the blockchain in which they are stored.
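Here is the toy sketch referenced in point 2: a minimal illustration of how each block's hash commits to the previous block, so tampering with an earlier transaction is detectable. It is a teaching example, not how any real blockchain client is implemented.

import hashlib
import json

def make_block(transactions, previous_hash):
    # A block records its transactions, the previous block's hash, and its own hash.
    block = {"transactions": transactions, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["Alice pays Bob 5"], previous_hash="0" * 64)
block2 = make_block(["Bob pays Carol 2"], previous_hash=genesis["hash"])

# Tampering with the first block changes its hash, so block2's stored
# previous_hash no longer matches and the chain fails verification.
genesis["transactions"][0] = "Alice pays Bob 500"
recomputed = hashlib.sha256(
    json.dumps({"transactions": genesis["transactions"],
                "previous_hash": genesis["previous_hash"]}, sort_keys=True).encode()
).hexdigest()
print("Chain still valid?", recomputed == block2["previous_hash"])  # False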


Advanced Analytics: The 3 Biggest Trends Changing the Data Ecosystem

You don't need us to tell you that the data world – and everything it touches, which is, like, everything – is changing rapidly. These trends are driving the opportunities that will fuel your career adventure over the next few years. At the heart of them is a massive wave of data being generated and collected by organizations worldwide. With this data, we can shift our focus as analysts from explaining the past to predicting the future. To do that, we need to spend less time doing the same things over and over and more time doing brand new things. And accomplishing all these changes will require us to work together differently than we do now.

1. Bigger and Faster Data

You've probably already heard that every two years we, as humans, double the amount of data in the world. This literally exponential growth of data is impacting analysis in some big ways.

2. Predictive Analytics

The vast majority of time spent by the vast majority of today's analysts goes into understanding data collected in the past, often in the form of reports and dashboards. Those days are coming to an end. The data and tools now available allow analysts to go beyond just convincing someone to do something and instead, often, to just do it themselves.

3. Automation of Tasks

Once upon a time, analysts built a model in Excel and, once a month or so, exported it to PowerPoint and sent it to (or even printed it out for) the managers who relied on regular reports. Soon there were too many reports, so maybe they used macros in Excel to automate report creation, or were lucky enough to have a dashboard program with some automation built in. The future promises even more than this.
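As a concrete illustration of that kind of report automation, here is a minimal sketch that aggregates the latest month's data and saves a chart and summary file, ready for a scheduler to run. The file and column names (monthly_sales.csv, month, region, revenue) are illustrative assumptions.

import pandas as pd
import matplotlib.pyplot as plt

sales = pd.read_csv("monthly_sales.csv", parse_dates=["month"])
latest = sales[sales["month"] == sales["month"].max()]

# Aggregate the latest month by region and save the chart and data.
summary = latest.groupby("region")["revenue"].sum().sort_values(ascending=False)
summary.plot(kind="bar", title=f"Revenue by region, {latest['month'].iloc[0]:%B %Y}")
plt.tight_layout()
plt.savefig("revenue_by_region.png")      # drop straight into the deck
summary.to_csv("revenue_by_region.csv")   # or feed a dashboard or email job

Run it from cron or a workflow tool each month and the export-and-paste routine disappears.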


The 8 Things a Chatbot needs to be called Intelligent

As the internet has grown, so has folks' tendency to chat instead of talk. Whether it's booking an appointment at the local paediatrician or asking your boyfriend if he wants to go out for some pasta, people like texting. This is what chatbots want to capitalize on: people's aversion to talking on the phone. Companies all over the world have embraced chatbots with open arms. They bring along a sense of humanity that no IVR can match. One of the reasons is the complete lack of any voice-based interaction: while those pleasant-sounding IVR voices may speak correct English (or any other language), they speak with such robotic emotion that the only thing the person on the other end wants to do is strangle themselves (or the IVR lady). The lack of direct human interaction allows chatbots to be much more than a computer program; employing mankind's greatest asset, imagination, chatbots can (almost) substitute for a real live human on the other end.

How to Build an Intelligent Chatbot Using AI

Computer scientists all over the world are doing great work to make sure that chatting with a bot can really feel like talking to another live human. Conversational artificial intelligence, the field of study that chatbots fall under, has seen tremendous progress in the last few years. Tech giants like Facebook, Google and Apple keep publishing both highly valuable research papers and consumer apps that show off the work they have done while training their chatbots to become even better. However, the huge praise chatbots have received has also muddled the market. There is a huge difference between a generic chatbot that can only choose between a set of responses and an intelligent chatbot that tries to make sense of the user's messages and respond accordingly. If you are a business, you need the latter, but chances are the agency you hired to develop your chatbot is trying to fool you with a cleverly disguised version of a generic chatbot. Here are 8 things that your intelligent chatbot MUST have:

1. Carry an Intelligent Conversation

A conversation is much more than saying yes or no, and the longer a conversation goes on, the more complex it gets. To carry an intelligent dialogue, the bot must maintain the context of the conversation at all times. It also has to understand that natural conversations don't always progress linearly: the bot must be able to process an unexpected reply and adapt to changes in the course of the conversation (see the context-tracking sketch at the end of this article).

2. Build Contextual Engagement

A smart chatbot has to understand who it is chatting with. To provide a truly personal experience, the chatbot has to know about the user's interests, attributes and personal information, then tailor the conversation to fit them. The bot needs to provide content, advice, and offers that fit the user exactly. If all the information is generic, it will be shallow, unengaging, and in many cases not very useful.

3. Leverage Real-Time Transaction Data

Connected with the need for contextual engagement, an intelligent chatbot must be able to access real-time insights on transactions. Without real-time data access and analytics, the power of artificial intelligence (AI) and contextual advice (either human-based or with chatbots) is limited.
4. Reuse Existing Content

To have a meaningful impact, it is crucial for the chatbot to be able to access content created and maintained in digital repositories across all channels. From digital 'brochureware' to FAQs, rules and regulations, and rate information, bots must be able to access and leverage this insight in real time.

5. Build Deep Knowledge

To build engagement, a chatbot needs to be able to provide advice, not just balances. Personetics believes bots need to be purpose-built, with deep knowledge of the issues important to the customer. With PayPal supporting payments through Facebook Messenger, the bar for transactions through the bot channel has been set and is being raised.

6. Work Seamlessly Across Channels

Customers expect a consistent experience across the digital landscape: online, mobile app, Facebook Messenger, Amazon's Alexa, and so on. A bot cannot be a silo; it should be able to traverse across and between multiple channels. This may be a challenge for organizations that still cannot achieve this across their internal channels (mobile, branch, online, call center).

7. Get Smarter Over Time

An intelligent chatbot must get to know customers better over time as more conversations and transactions take place, and it must improve based on how a customer reacts to the information and advice it provides.

8. Anticipate Customer Needs

Almost half of all chatbots are used only once. This happens when the bot experience does not meet or exceed expectations. To get customers into the habit of conversing with your chatbot, it needs to proactively reach out to them with information, insight, and advice, presented at the right time and place based on predictive analysis of individual customer needs.
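Here is the context-tracking sketch referenced in point 1: a toy illustration of a bot that remembers what was gathered earlier in the dialogue and copes with a topic switch instead of restarting. Intent detection is a keyword stub; a real system would use an NLU model, and all names and amounts here are made up.

from dataclasses import dataclass, field

@dataclass
class Context:
    user_name: str
    slots: dict = field(default_factory=dict)  # facts remembered across turns

def detect_intent(message: str) -> str:
    text = message.lower()
    if "balance" in text:
        return "check_balance"
    if "transfer" in text or "send" in text:
        return "transfer_money"
    return "small_talk"

def respond(message: str, ctx: Context) -> str:
    intent = detect_intent(message)
    if intent == "transfer_money":
        ctx.slots["pending"] = "transfer"
        return f"Sure {ctx.user_name}, who should receive the transfer?"
    if intent == "check_balance":
        # The pending transfer is kept in context, not thrown away.
        note = " (we can finish your transfer afterwards)" if ctx.slots.get("pending") else ""
        return f"Your balance is $1,240.{note}"
    return "Got it. Anything else I can help with?"

ctx = Context(user_name="Priya")
print(respond("I want to transfer money", ctx))
print(respond("Wait, what's my balance first?", ctx))  # topic switch, context kept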


5 Data Quality Challenges in Data Science

In this era when data science and AI are evolving quickly, critical business decisions are being taken and strategies built on the output of such algorithms, so ensuring their efficacy becomes extremely important. When the majority of any data science project's time is spent on data preprocessing, it becomes extremely important to have clean data to work with. As the old saying goes, 'garbage in, garbage out': the outcome of these models depends heavily on the nature of the data fed in, which is why data quality challenges in data science are becoming increasingly important.

Challenges to Data Quality in Data Science

Let's understand this problem better using a case. Say you are working for an Indian bank that wants to build a customer acquisition model for one of its products using ML. As with typical ML models, it needs lots and lots of data, and as the size of the data increases, so do your problems with it. While doing data prep for your model, you might face the following quality challenges. The most common causes of data quality issues are:

Duplicate Data: Suppose you are creating customer demographic variables for your model and notice a cluster of customers in your dataset with exactly the same age, gender and PIN code. This is quite possible, since a bunch of people of the same age and gender can live in the same PIN code, but you need to inspect the customer details table more closely and check whether the rest of their details (mobile number, education, income, etc.) are also identical. If they are, it is probably due to data duplication. Multiple copies of the same records not only take a toll on computing and storage but also skew the outcome of machine learning models by creating a bias.

Inaccurate Data: If you are working with location-specific data, it is quite possible that the pincode column you fetched contains values that are not 6 digits long. This problem stems from inaccurate data and can hurt your model wherever data needs to be aggregated at the PIN code level. Features with a high proportion of incorrect data should be dropped from your dataset altogether.

Missing Data: Some data points might not be available for your entire customer base. Suppose your bank started capturing customers' salaries only in the last year; customers who have been with the bank for more than a year will not have their salary details captured. However important you think this variable may be for your model, if it is unavailable for more than 50% of your dataset, it cannot be used in its current form.

Outliers: Machine learning algorithms are sensitive to the range and distribution of attribute values. Outliers can spoil and mislead the training process, resulting in longer training times, less accurate models and ultimately poorer results. Correct outlier treatment can be the difference between an accurate and an average-performing model.

Bias in Data: Bias error occurs when your training dataset does not reflect the realities of the environment in which the model will run. In our case, the potential customers an acquisition model will score in the future are typically of two types: credit-experienced or new to credit.
If your training data contains only credit-experienced customers, it will be biased and will fail miserably in production, because all the features that capture customer performance from credit history (bureau data) will not be present for new-to-credit customers. Your model might perform very well on experienced customers but will fail for the new ones. ML models are only as good as the data they are trained on; if the training data has systematic bias, your model will also produce biased results.

How to Address These Challenges

Now that we understand the data quality challenges, let's see how we can tackle them and improve our data quality. Before going further, accept that data will never be 100% perfect; there will always be inconsistencies from human error, machine error or the sheer complexity of ever-growing data volumes. While developing ML models, there are several techniques we can use to address these issues. Apart from these techniques, we can also add logical rule-based checks, with the help of domain experts, to validate whether the data reflects real values. Many software solutions also exist in the market to manage and improve data quality in data science and help you create better machine learning solutions.

Final Words

Dirty data is the single greatest threat to success with analytics and machine learning, and can be the result of duplicate data, human error, and nonstandard formats, to name a few factors. The quality demands of machine learning are steep, and bad data can backfire twice: first when training predictive models and second in the new data the model uses to inform future decisions. When 70% to 80% of a data scientist's time on any ML project is spent in the data preparation phase, ensuring that high-quality data is fed into ML algorithms should be of the highest importance. With more and more data being generated and captured every day, addressing this challenge now is more important than ever.
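As a concrete illustration of the rule-based checks mentioned above, here is a minimal pandas sketch covering the duplicate, inaccurate, missing and outlier cases from the bank example. The file and column names (customer_master.csv, mobile_no, dob, pincode, salary) are illustrative assumptions.

import pandas as pd

customers = pd.read_csv("customer_master.csv", dtype={"pincode": str})

# Duplicate records: the same person captured more than once.
dupes = customers.duplicated(subset=["mobile_no", "dob", "pincode"], keep="first")
print(f"Duplicate rows: {dupes.sum()}")

# Inaccurate data: Indian PIN codes must be exactly 6 digits.
bad_pins = ~customers["pincode"].str.fullmatch(r"\d{6}", na=False)
print(f"Invalid pincodes: {bad_pins.sum()}")

# Missing data: flag features unavailable for more than half the base.
missing_share = customers.isna().mean()
print("Columns >50% missing:", list(missing_share[missing_share > 0.5].index))

# Outliers: flag salaries outside 1.5 * IQR as candidates for treatment.
q1, q3 = customers["salary"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = (customers["salary"] < q1 - 1.5 * iqr) | (customers["salary"] > q3 + 1.5 * iqr)
print(f"Salary outliers: {outliers.sum()}")

Checks like these are usually wired into the ingestion pipeline so that every refresh of the data is validated before it reaches the modeling stage.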


Predictive Analytics: 4 Assumptions Business Leaders Have

Business leaders and stakeholders often wonder about the right time to start looking at analytics, and sometimes hold back due to concerns about data availability, data quality, lack of resources and the value of the overall exercise. We have been asked quite a few questions ourselves in the last couple of months by decision makers across the insurance industry. The most frequent ones are quoted below, along with our responses.

Assumption 1: We just have a few thousand records; I am not sure this is enough for any kind of predictive analytics.

That's a valid observation: for any predictive model to be successful, we need to build and validate it on a sufficient dataset. Generally, you can build a fairly good model with 1,000 records and at least 100 events, for example 100 lapses among 1,000 observed customers. As a rule of thumb, in addition to the above, there should be at least 20 records for each variable used for prediction; for example, if 10 variables are used, the minimum number of records expected is 10 × 20, i.e. 200 (a tiny helper encoding these rules appears at the end of this article). This whole process can also help you identify deficiencies in the data collection process, such as missing values, invalid data, or additional variables that should have been collected. Such interventions at an early stage can be very helpful and go a long way in improving data quality.

Assumption 2: Our data quality is too bad; I don't think we can do it right now.

Addressing data quality is core to the modeling process. Data, once imported, is processed into a meaningful shape before any further analytics. The availability of high computing power at lower cost means any size of data is small nowadays and can be processed in less time and at lower cost.

Assumption 3: I am not too sure about the return on analytics.

The real fruit of analytics lies not just in scorecards or numbers but in the way it is integrated and implemented within the organization. Having a list of customers in Excel scored on the basis of lapsation might not be very useful, but if the scoring is real-time and integrated across your web and mobile IT ecosystem, giving your agents and customer service team insight into consumer behavior every time a customer interacts with your firm, it becomes much more actionable. Think about product affinity ratings for a customer, integrated with the tablet app agents carry these days: not only will your agent be able to push the right product based on the customer's needs, but, more importantly, build a long-term relationship.

Assumption 4: I already have basic predictive modeling initiatives running, but they are not very effective. What more can I do?

The basic premise of any analytics initiative is framing the right question, having the right data at hand, and finally a strong, actionable strategy. Doing this right will definitely yield a good result. Once you have exhausted internal data sources, you can also try adding external data sources like CIBIL, social media, and economic indicators such as inflation and exchange rates to glean information about financial behavior, consumer lifestyle and events. Frame hypotheses you would want to validate against external data sources and test them.
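Here is the tiny helper referenced under Assumption 1. It simply encodes the rules of thumb quoted above (roughly 1,000 records, 100 events, and 20 records per predictor); these thresholds come from the text, not from any universal statistical requirement.

def enough_data(n_records: int, n_events: int, n_predictors: int) -> bool:
    """Check the thumb rules quoted above for a proposed predictive model."""
    checks = {
        "records >= 1000": n_records >= 1000,
        "events >= 100": n_events >= 100,
        "records >= 20 per predictor": n_records >= 20 * n_predictors,
    }
    for rule, ok in checks.items():
        print(f"{rule}: {'PASS' if ok else 'FAIL'}")
    return all(checks.values())

# Example from the text: 1,000 observed customers, 100 lapses, 10 predictors.
enough_data(n_records=1000, n_events=100, n_predictors=10)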


Customer Segmentation: Data Science Perspective

Organizations around the world strive to achieve profitability in their business. To become more profitable, it is essential to satisfy the needs of customers. But when variations exist between individual customers, how can organizations do that effectively? The answer is by recognizing these differences and differentiating customers into distinct segments. But how do organizations segment their customers? In this article we'll help you understand this from a data science perspective.

What Is Customer Segmentation?

Customer segmentation is the process of dividing the customer base into different segments, where each segment represents a group of customers who share common characteristics and similar interests. As explained above, the exercise of customer segmentation is done to better understand customer needs and deliver targeted products, services and content. Over time, all sorts of organizations, from e-commerce to pharmaceuticals to digital marketing, have recognized the importance of customer segmentation and are using it to improve customer profitability. Customer segmentation can be carried out on the basis of various traits, such as demographic, geographic, behavioral and psychographic characteristics.

How to Perform Customer Segmentation

Start with identifying the problem statement. One of the foremost steps is to identify the need for the segmentation exercise; the problem statement and the expected output will guide the process. Depending on the case, the intent or need to perform customer segmentation differs, and this further determines the approach taken to achieve the desired outcome.

Gathering data. The next step is to have the right data for the analysis. Data can come from different sources: the company's internal database, or surveys and other campaigns. Third-party platforms like Google, Facebook and Instagram have advanced analytics capabilities that allow the capture of behavioral and psychographic customer data.

Creating the customer segments. Once you have defined the problem statement and gathered the required data, the next step is to carry out the segmentation exercise. Data science and statistical analysis, with the help of machine learning tools, help organizations deal with large customer databases and apply segmentation techniques. Clustering, a data science method, is a good fit for customer segmentation in most cases. The right clustering algorithm depends on the type of clustering you want; many algorithms use similarity or distance measures between data points in the feature space to discover dense regions of observations. Widely used clustering algorithms include k-means, hierarchical clustering, DBSCAN and Gaussian mixture models (a minimal k-means sketch follows below).

Segmentation backed by data science helps organizations forge a deeper relationship with their customers. It helps them take informed retention decisions, build new features, and strategically position their product in the market.
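As a concrete illustration, here is a minimal sketch of clustering-based segmentation with k-means in scikit-learn. The file and feature names (customer_features.csv, age, annual_spend, visits_per_month) and the choice of four segments are illustrative assumptions; in practice the number of clusters is chosen with the elbow method, silhouette scores or business judgment.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.read_csv("customer_features.csv")
features = customers[["age", "annual_spend", "visits_per_month"]]

# Scale features so no single attribute dominates the distance measure.
scaled = StandardScaler().fit_transform(features)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
customers["segment"] = kmeans.fit_predict(scaled)

# Profile each segment to give it a business-friendly description.
print(customers.groupby("segment")[["age", "annual_spend", "visits_per_month"]].mean())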


Analytics: No Pain, No Gain

"Analytics is a journey and not a destination! It takes considerable effort to frame that journey and execute it with a sense of purpose. You will encounter stumbling blocks that may threaten your initiative, but you need to find a way out and keep marching ahead."

What is it like to build a data analytics strategy? We recently did a data analytics exercise for a US client in the education domain that had all the flavors of roadblocks one can encounter when venturing into analytics territory. I intend to summarize them here, along with the solutions we found in collaboration with all stakeholders.

Takeaways

This was just a month's exercise. Surely we will hit many more such scenarios ahead.
