
Previous Work

Some of the earlier work was done on the basis of VM mapping: the mapping between physical and virtual machines was optimised to avoid any kind of latency. This type of mapping was required in the multi-IaaS problem. Several other works were based on QoS requirements; that type of optimisation was used for web-application-based programs. It is quite possible that customers have different choices for resources such as RAM, CPU cores, and storage.


System Requirements

We used OpenStack to deploy our cloud. The DevStack script was used as the automated set-up tool; it walked us through setting up Nova, Cinder, Keystone, Neutron, and the other cloud APIs on a single machine.

Hardware – i7 quad core (octa-threaded machine)

Operating system – Ubuntu 16.04 LTS

Software – Python 3.6, Spyder

Libraries – numpy, sklearn, xgboost, psutil

Storage – 1 TB

 

Experimental Work

The work started with two main tasks:

1) Analysis

2) Core commitment and recommendation

 

1) Analysis Algorithm – A VM instance runs physically at the company's location, so the latency is very low. Latency occurs while analysing the VMs only when the hardware is at 90% or more of its capacity. Unless the load is very high there is very little latency in this part, and that is why we say it is NP-complete.
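As a rough illustration of the 90% threshold mentioned above, a monitoring loop could flag the host whenever it is close to saturation before running the heavier analysis. The threshold and polling interval below are illustrative values, not figures taken from the experiment.

import time
import psutil

THRESHOLD = 90.0  # the 90% capacity figure discussed above

while True:
    usage = psutil.cpu_percent(interval=1)  # overall CPU utilisation over one second
    if usage >= THRESHOLD:
        print("Host at %.1f%% capacity - analysing the VMs now would add latency" % usage)
    else:
        print("Host at %.1f%% capacity - analysis latency should stay low" % usage)
    time.sleep(5)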

Data of this kind was collected from the virtual machine. A Python script was used to collect the data and write it into a CSV file.

 

# Python script

import psutil
from datetime import date
import calendar

# Open the output CSV file and write the header row
f = open("set2.csv", 'w')
f.write("Day,Cores,Total,CPU1,CPU2,CPU3,CPU4,Actual Required\n")

n = 4  # number of cores on the VM


def actual_calculator(a):
    # Map the summed per-core usage to the number of cores actually required
    if sum(a) < 30:
        return 1
    elif sum(a) < 50:
        return 2
    elif sum(a) < 70:
        return 3
    else:
        return 4


while True:
    my_date = date.today()
    day = calendar.day_name[my_date.weekday()]
    # Per-core CPU utilisation sampled over a 0.5-second interval
    a = psutil.cpu_percent(interval=0.5, percpu=True)
    act = actual_calculator(a)
    f.write(day + ',' + str(n) + ',' + str(sum(a)) + ',' + str(a[0]) + ',' + str(a[1]) + ',' + str(a[2]) + ',' + str(a[3]) + ',' + str(act) + '\n')
    f.flush()  # persist each sample immediately, since the loop runs for a week

   

 

Algorithm for data analysis

a)      Read the number of cores n in a VM

b)      Start writing the core usage into a CSV file

c)      Push the CSV file to the remote user (a sketch of one way to do this follows this list)

d)      End after a week
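The report does not specify how step (c) was carried out. A minimal sketch of one possible approach is shown below; it assumes password-less SSH access to the remote analysis machine, and the user name, host, and destination path are placeholders rather than values from the original setup.

import subprocess

# Hypothetical remote destination for the collected log (placeholder values)
REMOTE = "user@analysis-host:/data/vm-usage/"

def push_csv(path="set2.csv"):
    # Copy the CSV produced by the collection script to the remote user over scp
    result = subprocess.run(["scp", path, REMOTE])
    return result.returncode == 0

if __name__ == "__main__":
    if push_csv():
        print("set2.csv pushed to the remote host")
    else:
        print("push failed")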

  

   

 

This image depicts the prediction efficiency of the machine learning algorithm being used. The cross-validation score is calculated with a Naive Bayes model, which can be changed according to the dataset.
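A cross-validation score of this kind can be computed directly from the collected log with scikit-learn. The sketch below is one way to do it, assuming the set2.csv file produced by the collection script above; the choice of Gaussian Naive Bayes follows the text, while the column selection and the number of folds are assumptions for illustration.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Load Total, CPU1-CPU4 and the Actual Required label, skipping the header row
data = np.genfromtxt("set2.csv", delimiter=",", skip_header=1, usecols=(2, 3, 4, 5, 6, 7))
X = data[:, :5]              # total usage and the four per-core readings
y = data[:, 5].astype(int)   # number of cores actually required

# 5-fold cross-validation score of a Gaussian Naive Bayes classifier
scores = cross_val_score(GaussianNB(), X, y, cv=5)
print("Cross-validation accuracy: %.3f" % scores.mean())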

 

This is the confusion matrix for the result. Its primary diagonal gives the correct outputs, and the off-diagonal entries are the wrong predictions.

So here the correct results are 74575 + 4434 + 438 = 79447; this helps us calculate the F-score and precision (p) score for the algorithm.
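The confusion matrix, precision score, and F-score can all be obtained from scikit-learn once a held-out test set has been scored. The sketch below assumes the same set2.csv log and Naive Bayes model as above; the test-set size and the weighted averaging are illustrative choices, not details from the original experiment.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, precision_score, f1_score

# Reload the log, hold out a test split, and fit the model
data = np.genfromtxt("set2.csv", delimiter=",", skip_header=1, usecols=(2, 3, 4, 5, 6, 7))
X_train, X_test, y_train, y_test = train_test_split(data[:, :5], data[:, 5].astype(int), test_size=0.2)
y_pred = GaussianNB().fit(X_train, y_train).predict(X_test)

cm = confusion_matrix(y_test, y_pred)
print(cm)
print("Correct predictions:", cm.trace(), "out of", cm.sum())  # primary diagonal vs total

# Precision and F-score averaged over the core-count classes
print("Precision:", precision_score(y_test, y_pred, average="weighted"))
print("F-score:", f1_score(y_test, y_pred, average="weighted"))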

 

Overall relationships of the CPU on the basis of core usage.

 

 

Comparison of the actual and predicted values. Here we can see that there is very little difference in the predicted results.
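One simple way to quantify how close the predicted and actual values are is to count exact matches and the average difference in cores. The arrays below are made-up placeholder values purely to illustrate the comparison; in the experiment they would be the logged requirements and the model's predictions.

import numpy as np

# Placeholder values for illustration only
actual = np.array([1, 2, 2, 3, 1, 4, 2, 3])
predicted = np.array([1, 2, 2, 3, 2, 4, 2, 3])

print("Exact matches: %.0f%%" % (100 * np.mean(actual == predicted)))
print("Mean absolute difference: %.2f cores" % np.mean(np.abs(actual - predicted)))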