Introduction
Core values are the fundamental beliefs of a person or organisation. They act as guiding principles that help people distinguish right from wrong. While walking down a commercial street, have you ever wished for an app that would notify you of discounts at your preferred retailer, or on a product, that align with your values? This is one of the featured functions of the new Seratio AI Bot. Conversely, it can also help retailers identify customers who align with their values and incentivise them.
AI Bot
The main objective of the AI Bot is to help its users make informed decisions based on their values. By building predictive models and applying AI algorithms, the AI Bot learns a user's preferences and behaviour. This helps the bot identify their likely next course of action and provide suggestions.
The AI Bot receives the required data from multiple core segments of the Seratio ecosystem, which are as follows:
1. SAPI
The Seratio API (SAPI) is the software that implements impact assessment and tracking (non-financial value analysis). It is responsible for calculating S/E scores and awarding S/E certificates, and it has a built-in S/E translator enabling engagement with, and use of, all other non-financial metrics.
From a corporate point of view, it can process and combine different data sets, including product provenance monitoring, modern slavery condition checks, and Proof-of-[…] metrics, and convert them all into a single-number non-financial attribute: the S/E score.
2. Seratio Platform Wallet
Anonymity is the fundamental idea behind blockchain. Although it provides high security, it also undermines trust. CCEG's solution to this problem is to maintain anonymity while attaching a non-financial, intangible attribute (social value) to each transaction. The fundamental functionality of the platform is to add this extra layer of value to every transaction.
Here, every user has the option to enter their basic, financial, and social information, which is used to generate the S/E (Social Earnings Ratio) certificate through SAPI.
The details used to calculate the SE Ratio of an individual include:
• Country
• Number of people dependent on the individual
• Value of asset
• Environmental considerate decisions
• Amount spent to help others
• Amount raised, or helped to raise, for others
• Number of people positively influenced by the individual
In the wallet, users can set SER preferences to allow or restrict transactions and interactions with organisations, products, projects, and people. In other words, a transaction can only be initiated if the counterparty satisfies the SE Ratio cut-off threshold set by the individual.
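As a minimal sketch of how such a preference check might work: the SERProfile structure and the can_transact helper below are hypothetical, with fields taken from the SER details listed above; the actual S/E score is computed by SAPI, not by this code.

```python
# Hypothetical wallet-side SER preference check; field names are illustrative
# assumptions drawn from the SER details listed above.
from dataclasses import dataclass

@dataclass
class SERProfile:
    country: str
    dependents: int               # number of people dependent on the individual
    asset_value: float
    environmental_decisions: int  # environmentally considerate decisions
    amount_spent_helping: float
    amount_raised: float
    people_influenced: int
    se_ratio: float               # S/E score as returned by SAPI

def can_transact(counterparty: SERProfile, cutoff: float) -> bool:
    """Allow a transaction only if the counterparty meets the SER cut-off."""
    return counterparty.se_ratio >= cutoff

# Example: a wallet configured with a cut-off of 0.7
alice = SERProfile("UK", 2, 120000.0, 5, 300.0, 1500.0, 40, se_ratio=0.82)
print(can_transact(alice, cutoff=0.7))  # True
```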

3. Provenance
Provenance Monitoring is an S/E-based analytical tool that tracks the provenance of products, companies, services, etc. For product provenance, the platform tracks both financial and non-financial attributes at every stage of the supply chain. This information is submitted to the blockchain through a smart contract, where it integrates with the Seratio blockchain platform. When a product moves from one part of the supply chain to another, the information is carried forward and verified against the corresponding record on the blockchain. This creates a secure, shared record of exchange for each product along with its specific product information.
When the product is complete, the collective information is processed by SAPI, which provides its aggregate social value along with its other tangible information.
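A minimal sketch of how a carried-forward record could be verified at each supply chain stage, assuming a simple hash chain; the record fields and helpers here are illustrative, and the actual platform performs this via smart contracts on the Seratio blockchain.

```python
# Illustrative hash-chained provenance records; the real platform submits
# these via smart contracts to the Seratio blockchain.
import hashlib
import json

def record_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_stage(chain: list, stage_data: dict) -> None:
    """Append a stage record that carries forward the previous stage's hash."""
    prev = chain[-1]["hash"] if chain else None
    body = {"data": stage_data, "prev_hash": prev}
    chain.append({**body, "hash": record_hash(body)})

def verify_chain(chain: list) -> bool:
    """Check every stage against the hash carried forward from the previous one."""
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else None
        body = {"data": rec["data"], "prev_hash": rec["prev_hash"]}
        if rec["prev_hash"] != expected_prev or rec["hash"] != record_hash(body):
            return False
    return True

chain = []
add_stage(chain, {"stage": "raw material", "se_score": 0.8})
add_stage(chain, {"stage": "manufacturing", "se_score": 0.6})
print(verify_chain(chain))  # True
```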
4. Rewarding Body
The rewarding body functionality supports organisations in creating social value by letting users earn tokens for the impact they make through their personal beliefs or social activity. While paid staff are rewarded with conventional currencies, social currencies can be distributed to an organisation's unpaid supporters and helpers. Whether it is a reward for a referral made by a valued customer or for volunteering for a social cause, rewards can be given as thanks. Users are rewarded with specified tokens for their social activities according to the rewarding policy set by the rewarding body, which then transfers the tokens directly to the user via the user's QR code.
5. Microshare exchange and Retail
A MICROSHARE is a unit of non-financial, intangible value gained by people through their personal beliefs or social activity (volunteering, charity, caring, etc.). The first batch of microshares, the SER microshares, was distributed to all who contributed to the ICO. The other altcoins will receive their microshares as well. At that point, microshares will be tradable for other microshares on the microshare exchange.
Soon, users will be able to acquire microshares from a rewarding body and buy discount vouchers from retailers. These retailers will empower the community and the values it believes in through their discount policies. A discount policy will contain:
• Minimum SER value needed to avail discount
• Type of microshare
• Amount of microshare
• Discount offered
Users will be able to purchase the discount in advance and present it at the retail shop, or pay at the counter.
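A sketch of how a retailer's discount policy and the corresponding eligibility check could be represented; the field names are assumptions mirroring the policy contents listed above.

```python
# Illustrative discount policy; field names mirror the policy contents above.
policy = {
    "min_ser": 0.6,            # minimum SER value needed to avail the discount
    "microshare_type": "SER",  # type of microshare accepted
    "microshare_amount": 50,   # amount of microshares required
    "discount_pct": 15,        # discount offered (%)
}

def eligible(user_ser: float, holdings: dict, policy: dict) -> bool:
    """Check a user's SER value and microshare holdings against a policy."""
    return (user_ser >= policy["min_ser"]
            and holdings.get(policy["microshare_type"], 0) >= policy["microshare_amount"])

print(eligible(0.75, {"SER": 120}, policy))  # True
```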
Key outcomes for regular users:

The AI Bot can give suggestions or predictions based on an individual's SE Ratio preferences.
Depending on the user’s values and behavioural patterns, some of the model prediction scenarios are as follows:
• If an individual wants to buy a product, the AI Bot can suggest a list of companies providing that product, filtered by the SE score the individual prefers (see the sketch after this list).
• If an individual wants to improve their SER value, the AI Bot can suggest a list of NGOs or rewarding policies based on their values.
• The AI Bot can suggest available offers and discounts, which can be availed using Microshares.
• The bot can also suggest ways to earn Microshares by integrating with the rewarding body features of the Seratio platform, taking into account the customer's proximity and preferences.
• The AI Bot can derive purchase patterns from an individual's purchase history and suggest their likely future buying behaviour and other related activities.
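For instance, the first scenario above could reduce to a simple filter over company SE scores; the company data and threshold below are made up for illustration.

```python
# Illustrative: suggest companies selling a product, filtered by the
# individual's preferred minimum SE score. All data is made up.
companies = [
    {"name": "EcoWear",  "product": "t-shirt", "se_score": 0.85},
    {"name": "FastFab",  "product": "t-shirt", "se_score": 0.40},
    {"name": "FairKnit", "product": "t-shirt", "se_score": 0.72},
]

def suggest(product: str, min_se: float) -> list:
    """Return matching companies, highest SE score first."""
    matches = [c for c in companies
               if c["product"] == product and c["se_score"] >= min_se]
    return sorted(matches, key=lambda c: c["se_score"], reverse=True)

print([c["name"] for c in suggest("t-shirt", min_se=0.7)])  # ['EcoWear', 'FairKnit']
```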
Key outcomes for Organisation/Company:
Similarly, the AI Bot can give suggestions or predictions based on an organisation's preferences and values. Some key examples are given below.

• If a company wants to launch a product and sell it to a targeted volume of customers, our AI Bot can suggest the SER value the company should hold to reach that volume.
• If a company wants to improve its SER value, our AI Bot can suggest the areas where the company's activities can be corrected in order to move from a low SER to a preferable SER value.
For example, the bot can suggest improving wages if one of the reasons for the company's low SER value falls into the Modern Slavery category.
• For a manufacturing company looking to improve its SER value, our AI Bot can suggest a list of suppliers with higher SER values with whom the company should associate, and a list of suppliers with whom it should not associate at any stage of product manufacturing.
• To improve their CSR activities, the bot can provide a list of NGOs or rewarding bodies and offer insights into their current and historic social activity. Companies can then either collaborate with these organisations or conduct their own activities.
Building Predictive Models – Our Approach
A predictive model predicts the future behaviour of a user. Predictive modelling is a family of statistical algorithms which, when applied to the provided data, output a mathematical function, equation, or logical program that predicts outcomes. To illustrate our approach, the steps we take are outlined below.
Stages of building a Predictive Model:

A. Data Cleaning:
This stage covers detecting, correcting, or removing inaccurate records from the given data source, which is important for maintaining data quality.
It includes (a short pandas sketch follows this list):
• Handling various data-import scenarios: different kinds of datasets (.csv, .txt), different delimiters (comma, tab, pipe), and different methods (read_csv, read_table)
• Getting basic information such as dimensions, column names, and a statistical summary
• Performing basic cleaning: removing NAs and blank spaces, imputing values for missing data points, changing variable types, etc.
• Creating dummy variables in various scenarios to help modelling
• Generating simple plots such as scatter plots, bar charts, histograms, and box plots
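A brief pandas sketch of these cleaning steps; the file name, delimiter, and column names are assumed for illustration.

```python
# Illustrative data-cleaning steps with pandas; file and column names
# are assumptions.
import pandas as pd

df = pd.read_csv("sales_data.csv", sep=",")   # read_table and other delimiters work similarly

print(df.shape)        # dimensions
print(df.columns)      # column names
print(df.describe())   # statistics summary

df = df.dropna(subset=["SalesCount"])          # drop rows missing the target
df["Discount"] = df["Discount"].fillna(0)      # impute missing discount values
df["Date"] = pd.to_datetime(df["Date"])        # change a variable's type
df = pd.get_dummies(df, columns=["Category"])  # dummy variables for modelling

df.plot.scatter(x="Discount", y="SalesCount")  # a simple plot (needs matplotlib)
```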
B. Data Wrangling:
This is the process of transforming and mapping data from its 'raw' form into a desired format using merging, grouping, concatenating, etc., for better decision making and analytics.
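A short sketch of typical wrangling operations; the frame names and keys here are assumptions.

```python
# Illustrative wrangling: merging, grouping, and concatenating with pandas.
import pandas as pd

sales = pd.DataFrame({"user_id": [1, 2, 1], "amount": [30, 45, 20]})
users = pd.DataFrame({"user_id": [1, 2], "se_ratio": [0.8, 0.5]})

merged = sales.merge(users, on="user_id")                # merge two sources
per_user = merged.groupby("user_id")["amount"].sum()     # group and aggregate
combined = pd.concat([sales, sales], ignore_index=True)  # concatenate frames
```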
C. Explore the data with Python
In this step, the data saved in the database from the given data source is loaded into Python for further processing. At this stage the data is converted into a dataframe, a tabular data structure provided by the Python package pandas.
Below is an illustrative sketch of a dataframe holding the details of a user for the calculation of the SE score.
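The columns are taken from the SER details listed earlier; all values are made up.

```python
# Illustrative user dataframe for SE score calculation; values are made up.
import pandas as pd

user_df = pd.DataFrame([{
    "Country": "UK",
    "Dependents": 2,
    "AssetValue": 120000,
    "EnvDecisions": 5,
    "AmountSpentHelping": 300,
    "AmountRaised": 1500,
    "PeopleInfluenced": 40,
}])
print(user_df)

# Keep only the columns required for modelling, dropping unwanted ones
features = user_df[["Dependents", "AmountSpentHelping", "PeopleInfluenced"]]
```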

After this, we access the required columns from the dataframe and drop the unwanted ones, as in the last line of the sketch above.
D. Creating and training a model
This is the step where the model is created and trained. For prediction, we first have to find a function/model that best describes the dependency (correlation) between the variables in our dataset.
We use a linear regression algorithm to create the model if the output to be predicted is a continuous variable, whereas we use a logistic regression algorithm if the output variable is binary or categorical.
After this, the dataset is split into training and testing datasets for the purpose of Training the Model.
• The training dataset is the one on which the model is built. This is the one on which the calculations are performed and the model equations and parameters are created.
• The testing dataset is used to check the accuracy of the model. The model equations and parameters are used to calculate the output based on the inputs from the testing datasets. These outputs are used to compare the model efficiency.
Training the model means fitting the created model to the training dataset. At this stage, we have a model trained on the training dataset and ready to handle the test dataset.
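A minimal scikit-learn sketch of this stage, assuming a pandas dataframe df with a continuous target column SalesCount; these names are illustrative.

```python
# Illustrative: split the data and fit a linear regression model.
# Assumes a dataframe `df` with feature columns and a continuous target.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression  # LogisticRegression for categorical targets

X = df.drop(columns=["SalesCount"])   # input variables
y = df["SalesCount"]                  # continuous output variable

# 80% of rows train the model; the remaining 20% test it
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)           # training = fitting the model to the training set
```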
E. Prediction
This is the stage where the prediction happens. The trained predictive model is ready to produce predictions from the test dataset created in the previous step; in scikit-learn, for example, the model's predict() method is used to generate them.
At this stage, the error between our test predictions and the actual values is calculated to check the efficiency of the created model.
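Continuing the sketch above, prediction and error checking could look like this:

```python
# Illustrative: predict on the held-out test set and measure the error.
# Continues the previous sketch (`model`, `X_test`, `y_test`).
from sklearn.metrics import mean_squared_error

predictions = model.predict(X_test)            # the model's predict() method
mse = mean_squared_error(y_test, predictions)  # error vs. the actual values
print(f"Mean squared error: {mse:.2f}")
```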

Explaining the approach using an example:
Consider a scenario in which we want to predict the number of sales a retailer will have on a future date, based on factors such as offers/discounts on a particular category of products for customers who buy using Microshares.
A. Data Cleaning
The provided data is prepared by removing or updating inaccurate entries to maintain data quality.
B. Data Wrangling
In this stage, we merge, group, or concatenate the data as needed when analysing our data source:
• Create a Database in MongoDB
• Create a collection named sales_data
• Save the sample data to sales_data (a short pymongo sketch follows)
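A short pymongo sketch of these steps; the database name and document fields are assumptions.

```python
# Illustrative: store sample sales data in MongoDB; names are assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["seratio_db"]        # create/use a database
sales_data = db["sales_data"]    # create/use the sales_data collection

sales_data.insert_many([
    {"Date": "2018-06-01", "Category": "Clothing", "Discount": 10,
     "UsedMicroshares": True, "SalesCount": 120},
    {"Date": "2018-06-02", "Category": "Sports", "Discount": 0,
     "UsedMicroshares": False, "SalesCount": 85},
])
```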
C. Explore the data with Python
• Load data from database to Python
• Import data source and convert to pandas dataframe
• Get the required columns from the dataframe, avoiding the unwanted ones (see the sketch below)
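These steps could look as follows, continuing the pymongo sketch above; the column names are the same assumptions.

```python
# Illustrative: load the sales_data collection into a pandas dataframe.
import pandas as pd

records = list(sales_data.find({}, {"_id": 0}))  # exclude MongoDB's internal _id field
df = pd.DataFrame(records)

# Keep only the columns needed for modelling
df = df[["Category", "Discount", "UsedMicroshares", "SalesCount"]]
```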
D. Creating and training a model
We use a linear regression algorithm to create the model, as described in the scenario. To create the model, we follow the steps below (a sketch follows them):
• Store the target variable, SalesCount, whose value we will be predicting
• Generate the training set, train
• Select anything not in the training set and put it in the testing set, test
• Print the shapes of both sets
• Initialise the model class, linear_model
• Fit the model to the training data
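A sketch of these steps, continuing from the dataframe df built above; the 80/20 split and the random seed are illustrative choices.

```python
# Illustrative: train `linear_model` following the steps above.
from sklearn.linear_model import LinearRegression
import pandas as pd

df = pd.get_dummies(df, columns=["Category"])  # make the categorical column numeric

target = "SalesCount"                          # the variable we will be predicting

train = df.sample(frac=0.8, random_state=1)    # generate the training set
test = df.drop(train.index)                    # anything not in train goes into test
print(train.shape, test.shape)                 # the shapes of both sets

linear_model = LinearRegression()              # initialise the model class
linear_model.fit(train.drop(columns=[target]), train[target])  # fit to the training data
```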
Now we have trained a Linear Regression Model named linear_model.
E. Prediction
At this stage, we can use the trained linear_model to predict the required output/results using the test set test.
• Generate our predictions, linear_predictions for the test set
• Compute error between our test predictions, linear_predictions and the actual values, test
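Continuing the sketch:

```python
# Illustrative: generate predictions for the test set and compute the error.
from sklearn.metrics import mean_squared_error

linear_predictions = linear_model.predict(test.drop(columns=[target]))
error = mean_squared_error(test[target], linear_predictions)  # error vs. actual values
print(f"Test MSE: {error:.2f}")
```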
At this stage, we have created a model named linear_model with Python, which can predict a retailer's sales count on a future day by analysing various factors: for example, whether there is an offer/discount on a particular category (clothing, sports accessories, etc.) on that day, or whether the customer will be buying with Microshares, and so on.
Conclusion:
This predictive model can be used by our AI Bot to help retailers predict future sales counts based on the factors mentioned. Likewise, by building different prediction models into our AI Bot, its main objective of helping users make informed decisions based on their values can be fulfilled.