Predictive analysis can be a powerful tool for forecasting commodity prices if it is used correctly. Different factors affect prices at different times and over different timescales. There is no single best model, no ‘one-size-fits-all’; the best model varies over time. Yet with the right knowledge and judgment, it is possible to identify predictable price movements. To do this, we ask the hard questions so that you don’t need to. Then we bring you the answers.
The first step towards a successful data analytics project is to clearly define its objectives and formulate the right hypotheses. We work closely with our clients to understand their specific investment portfolios and goals so that they can achieve their targets. Our market research analysts constantly keep themselves updated on the latest news and trends for commodities such as iron ore and pass the data to our data analytics team, who then act on it.
Our market research team carefully pulls the right data from secondary sources such as Bloomberg and various stock exchanges, covering commodity trade information such as:
In addition, we combine this quantitative data with qualitative data to make sense of the commodities industry before proceeding with more in-depth analysis.
Before data analysis can be performed, data processing is needed to structure the raw data into the right format. Organising the raw data into the relevant rows and columns makes analysis easier and more manageable down the road – an important step not to be missed.
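As a minimal sketch of this structuring step (the field names and values below are hypothetical, not our clients’ data), raw text records can be reshaped into labelled rows and columns like so:

```python
# Hypothetical raw feed: one string per record, fields separated by pipes.
raw_records = [
    "2024-01-02|iron_ore|142.50",
    "2024-01-03|iron_ore|144.10",
    "2024-01-04|iron_ore|141.75",
]

# Structure each raw record into a row with explicit column names.
columns = ["date", "commodity", "price"]
rows = [dict(zip(columns, record.split("|"))) for record in raw_records]

# Convert price strings to floats so downstream analysis can use them directly.
for row in rows:
    row["price"] = float(row["price"])

print(rows[0])  # {'date': '2024-01-02', 'commodity': 'iron_ore', 'price': 142.5}
```

With the data in this row-and-column shape, later steps such as filtering, aggregation, and modelling become straightforward.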
Data cleaning (sometimes called data wrangling) is one of the most important steps of any data analytics project. At Tivlon Technologies, we adopt the following approach to our data cleaning process:
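To illustrate two typical cleaning operations – deduplication and filling missing values – here is a small sketch on made-up observations (the data and the forward-fill policy are illustrative assumptions, not our production rules):

```python
# Hypothetical raw price observations; None marks a missing value, and the
# repeated first row simulates a double-reported tick.
raw = [
    {"date": "2024-01-02", "price": 142.5},
    {"date": "2024-01-02", "price": 142.5},   # duplicate
    {"date": "2024-01-03", "price": None},    # missing
    {"date": "2024-01-04", "price": 141.75},
]

# Step 1: drop exact duplicates while preserving order.
seen, deduped = set(), []
for row in raw:
    key = (row["date"], row["price"])
    if key not in seen:
        seen.add(key)
        deduped.append(row)

# Step 2: forward-fill missing prices from the last valid observation.
last = None
for row in deduped:
    if row["price"] is None:
        row["price"] = last
    last = row["price"]

print(deduped)
```

Real cleaning pipelines add many more checks (type validation, outlier handling, unit consistency), but the pattern of small, ordered, auditable steps is the same.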
Exploratory data analysis (EDA) helps us determine the relationships among the explanatory variables, and assess the skewness of their distributions as well as the direction and strength of the relationships between explanatory and outcome variables. We leverage various tools and software in our EDA process, including Microsoft Excel, R, Power BI, and RapidMiner.
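The two EDA measures named above can be computed directly. This sketch uses tiny made-up series (a hypothetical explanatory variable `x` and outcome variable `y`): the Pearson correlation captures the direction and strength of a linear relationship, and the skewness statistic captures distributional asymmetry.

```python
from statistics import mean, pstdev

# Hypothetical series: an explanatory variable and an outcome variable.
x = [10.0, 12.0, 11.0, 14.0, 13.0]
y = [140.0, 145.0, 143.0, 150.0, 147.0]

def pearson(a, b):
    """Pearson correlation: sign gives direction, magnitude gives strength."""
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / (pstdev(a) * pstdev(b))

def skewness(a):
    """Population skewness: zero for symmetric data, signed for skewed data."""
    m, s = mean(a), pstdev(a)
    return sum(((ai - m) / s) ** 3 for ai in a) / len(a)

r = pearson(x, y)
print(round(r, 3))
```

A correlation near +1 here would suggest the explanatory variable moves closely with the outcome, flagging it as a candidate predictor for the modelling stage.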
Model development is the process where we begin to build our predictive models using machine learning techniques such as Decision Trees, Support Vector Machines, and Neural Networks. We adopt a traditional process in which we split our cleaned data into training and test sets (typically in a 75%:25% ratio). Once that is done, we train our model on the training set and derive the best model by evaluating it on the test set to determine its accuracy and precision.
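The split-train-evaluate loop can be sketched with the standard library alone. The data, the trivial threshold “model”, and the 0/1 labels below are all hypothetical stand-ins for a real learner such as a decision tree; only the 75%:25% split and the held-out evaluation mirror the process described above.

```python
import random

# Hypothetical labelled examples: (feature, label) pairs, where the label
# marks whether the price rose (here derived from a simple rule).
random.seed(42)
data = [(x, 1 if x > 0.5 else 0) for x in [random.random() for _ in range(100)]]

# 75%:25% train/test split, shuffled first so both sets are representative.
random.shuffle(data)
cut = int(len(data) * 0.75)
train, test = data[:cut], data[cut:]

# "Train" a trivial threshold model: choose the cut-off that best separates
# the classes on the training set (a stand-in for a real learning algorithm).
best_t, best_acc = 0.0, 0.0
for t in [i / 100 for i in range(101)]:
    acc = sum((x > t) == bool(y) for x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

# Evaluate on the held-out test set to estimate generalisation accuracy.
test_acc = sum((x > best_t) == bool(y) for x, y in test) / len(test)
print(len(train), len(test), test_acc)
```

The key design point is that `test` is never seen during training, so `test_acc` estimates how the model would perform on genuinely new data.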
As seen in our predictive modelling process, model refinement is highly iterative as we strive for the model that is the most robust and accurate. We regularly review model performance using confusion matrices, which tally the true positives, false positives, true negatives, and false negatives that our model produces.
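Computing a confusion matrix is a simple tally over predictions versus actual outcomes. The labels below are hypothetical (1 = price up, 0 = not up); the derived accuracy, precision, and recall are the metrics typically tracked across refinement iterations.

```python
# Hypothetical model outputs versus actual outcomes.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

# Tally the four confusion-matrix cells.
tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

accuracy = (tp + tn) / len(actual)   # overall hit rate
precision = tp / (tp + fp)           # of predicted rises, how many were real
recall = tp / (tp + fn)              # of real rises, how many were caught
print(tp, fp, tn, fn, accuracy, precision, recall)
```

Tracking precision and recall separately matters in trading contexts: a false positive (predicting a rise that never comes) and a false negative (missing a real rise) carry different costs.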
Finally, we present the insights and prediction results as easy-to-understand visualizations in presentations and reports. We also offer our clients actionable recommendations to help them make more informed investment decisions and, ultimately, build a strong portfolio and trading track record.
Big data differs from conventional data in at least one of three ways: volume, velocity, and variety. Volume is ‘how much’; velocity is ‘how fast it arrives’; variety is ‘how many different types of data there are’. To produce more accurate forecasts from more data, we apply advanced analytics to gain insights from the different types of data (numerical, graphical, text) for you.