
As a Business Development Expert working in an Agile environment, I found the presentation about the challenges of Business Analytics quite engaging. Before diving into the blog post, let me start with a definition: what is Business Analytics?

Business Analytics is the art and science of discovering new insights about a business by applying statistical and artificial intelligence methods. It helps company leaders with decision-making and problem-solving, customer-based campaigns, segmentation, and much more. However, as the advantages of Business Analytics grow, so do its challenges.

In the real world, generated data is dirty, incomplete, and incompatible. If one manages to clean it and produce an output from it, that previously unused data becomes as valuable as gold. Looking at the business analytics lifecycle, the first and most important phase is reliable data collection. If a company uses data that is unreliable or unrelated to the issue, then no matter how effectively you apply machine learning algorithms, the output will not be useful. The next phase is data processing: converting the data into the required formats, cleaning out unnecessary details, and making it consistent. The following phase covers data mining and model development. It is considered the main stage, as the quality of the applied algorithms determines the quality of the output. The next phase is extracting insights, whose quality fully depends on the previous stages. In this stage the plan is developed, findings are reported, and the final refinements are made. In the end, the developed model is deployed in a real-world environment.
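The lifecycle above can be sketched as a minimal pipeline. All function and field names here are my own illustrative assumptions, not part of any real framework:

```python
# Minimal sketch of the business-analytics lifecycle phases described above.
# Function and field names are illustrative assumptions.

def collect(raw_records):
    # Phase 1: reliable data collection - keep only records that
    # actually contain the field the analysis needs.
    return [r for r in raw_records if r.get("value") is not None]

def process(records):
    # Phase 2: data processing - convert to a consistent type and
    # drop incompatible entries.
    cleaned = []
    for r in records:
        try:
            cleaned.append(float(r["value"]))
        except (TypeError, ValueError):
            continue  # dirty entry, discard
    return cleaned

def model(values):
    # Phase 3: model development - a trivial stand-in "model": the mean.
    return sum(values) / len(values) if values else None

def extract_insight(score):
    # Phase 4: extracting insights - quality depends entirely on
    # the phases above.
    return f"average metric: {score:.2f}" if score is not None else "no usable data"

raw = [{"value": "3.5"}, {"value": None}, {"value": "oops"}, {"value": 4.5}]
insight = extract_insight(model(process(collect(raw))))
print(insight)  # average metric: 4.00
```

Note how the dirty entries (`None` and `"oops"`) are filtered out before the model ever sees them, which is exactly why the early phases matter so much.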

Although the process may seem straightforward, business data presents challenges at every step. The first challenge is poor-quality data. As one of my favorite quotes states - garbage in, garbage out - data quality strongly affects the output of the entire development effort. In other words, even the most effective machine learning algorithm, applied to garbage, will still produce garbage. The next challenge is the visual representation of data. Even with brilliant data and a well-applied model, if you cannot present and visualize the results graphically, they cannot be sold or deliver their value. Another important challenge is collecting real-time, meaningful data. Outdated data can have significant negative impacts on decision-making. With real-time reports and alerts, decision-makers can be confident they are basing their choices on complete and accurate information.
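One practical answer to "garbage in, garbage out" is to flag dirty records before they ever reach a model. Here is a small sketch of such a pre-modeling quality check; the field names and records are invented for illustration:

```python
# Sketch of a pre-modeling data-quality check: flag missing fields
# and duplicate records before modeling. Field names are invented.

def quality_report(records, required_fields):
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        key = tuple(rec.get(f) for f in required_fields)
        if key in seen:
            issues.append((i, "duplicate record"))
        seen.add(key)
    return issues

records = [
    {"customer": "A1", "amount": 10.0},
    {"customer": "A1", "amount": 10.0},  # exact duplicate
    {"customer": "", "amount": 7.5},     # missing customer id
]
issues = quality_report(records, ["customer", "amount"])
for index, problem in issues:
    print(index, problem)
```

A report like this will not fix the data by itself, but it makes the "garbage" visible early, when it is still cheap to correct.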

As the saying goes, data is the new oil 🙂 If you have the analytical skills to process it, it will become just as valuable.


First of all, what is sentiment analysis? Sentiment analysis is an NLP (natural language processing) technique used to identify whether a piece of text is positive, negative, or neutral. Organizations typically use it to categorize feedback about their brand and to run appropriate campaigns based on the results.

In our master's thesis project, two types of data will be used for sentiment analysis. The first is Azerbaijani news articles collected from various local web pages. The main goal here is to identify the sentiment of the news. The data is separated into two sets: training data and test data. The training data will be labeled as positive, negative, or neutral based on the polarity of each news article. Afterwards, the test data will be used to evaluate the accuracy of the program.
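The train/test split described above can be sketched like this. The article headlines and labels below are invented placeholders, not data from the actual project:

```python
import random

# Sketch of splitting labeled news articles into training and test sets.
# Headlines and labels are invented placeholders.
labeled_articles = [
    ("Economy grows faster than expected", "positive"),
    ("Storm causes damage in several regions", "negative"),
    ("Parliament session scheduled for Monday", "neutral"),
    ("New hospital opens in the capital", "positive"),
    ("Unemployment figures rise slightly", "negative"),
]

random.seed(0)                    # reproducible shuffle
shuffled = labeled_articles[:]
random.shuffle(shuffled)

split = int(0.8 * len(shuffled))  # 80% for training, 20% for testing
train_data, test_data = shuffled[:split], shuffled[split:]

print(len(train_data), len(test_data))  # 4 1
```

Shuffling before splitting matters: without it, articles collected from the same source or period would cluster in one set and bias the accuracy estimate.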

The second type of data we will use in our project is a lexicon dictionary. We will manually translate the words and assign a sentiment value to each of them. Once the dictionary contains a sufficient number of words, the program will be able to evaluate a whole given text. Of course, the number of words in the dictionary will affect the accuracy of the result: the more words, the higher the accuracy. The key motivation here is that while many such dictionaries exist for other languages, there is a lack of verified sources for Azerbaijani. Therefore, building a reliable Azerbaijani dictionary and offering the program to local organizations could have a dramatic influence on the progress of local brands.
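A lexicon-based scorer like the one described could work roughly as follows. The entries and sentiment values below are invented examples for illustration, not entries from the actual dictionary being built:

```python
# Rough sketch of lexicon-based sentiment scoring as described above.
# The words and sentiment values are invented illustrative entries,
# not part of any real Azerbaijani lexicon.
lexicon = {
    "gözəl": 1.0,   # "beautiful" -> positive
    "yaxşı": 0.8,   # "good" -> positive
    "pis": -0.9,    # "bad" -> negative
}

def score_text(text, lexicon):
    # Sum the sentiment values of known words; unknown words count as 0,
    # which is why a larger dictionary improves accuracy.
    words = text.lower().split()
    total = sum(lexicon.get(w, 0.0) for w in words)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

print(score_text("Bu gözəl bir gündür", lexicon))  # positive
```

Every word missing from the lexicon silently contributes zero, which makes the connection between dictionary size and accuracy concrete: coverage is everything in this approach.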

NLP forms the basis of modern software that recognizes and interprets human language. Social networks are a crucial part of this evolution, as every day we unintentionally produce terabytes of data simply by using modern social platforms. Natural Language Processing is a growing branch of Artificial Intelligence that aims to automate interactions between computers and humans using the structure of natural language.

Even before coming to Washington, DC, I always believed it would be a great experience and bring valuable knowledge to each of us. Coming to the ADA University Foundation and taking lectures from GWU professors has already shown that it is going to be full of diverse perspectives and learning.

One thing I have learned in the last two lectures is the George Heilmeier Catechism, which helps agencies think through and evaluate proposed research programs. Working through its questions, one truly realizes the unseen sides of a project: what the limits are, what the risks are, what the objectives are, and whether it is even revolutionary. The question I liked most of all is "Who cares?" - with it, you can identify the audience you are focusing on.

The next thing I learned from the lectures is how to differentiate between Computer Vision and NLP. I had heard some image detection questions before but did not know exactly which area each belongs to. Now I clearly understand that Computer Vision focuses on when and where an image was taken, and what and who is in it. NLP, on the other hand, focuses on whether articles cover the same topic, who the main subject of an article is, which area it refers to, and so on.

I always appreciate it when a lecturer not only covers the lecture material itself but also shares insights from personal experience and real-world events to broaden our outlook. These lectures seem to be exactly that kind, so hopefully we will greatly broaden our outlook here.

Enjoy

My name is Leyla E. Aliyeva. I was born in Baku, Azerbaijan, where I still live.

I am currently a student in the Master of Science in Computer Science and Data Analytics program run jointly by ADA and George Washington University. At the same time, I work remotely at Kapital Bank (Baku, Azerbaijan) as a business development expert (using Agile methodologies) on ATM and terminal projects.

Before my master's degree, I studied Computer Science as an undergraduate at ADA University. During those years, I was engaged in front-end development and worked as a front-end developer. After graduation, I realized that analytics appealed to me more, so I switched my position at work as well.

Within Computer Science, I am interested in Sentiment Analysis - predicting a text's polarity using Natural Language Processing and Machine Learning.

Since I love my job a lot, in my free time I think about market-based campaign ideas for promoting our products and about how to gain advantages over competitors across the country. Additionally, I love taking aesthetically pleasing pictures that focus on the details.

This is my first blog post, so I hope it will be the start of a good blogging path.

Sincerely


Welcome to your brand new blog at GW Blogs.

To get started, simply log in, edit or delete this post and check out all the other options available to you.

For assistance, visit our comprehensive support site, check out our Edublogs User Guide or stop by The Edublogs Forums to chat with other edubloggers.

You can also subscribe to our brilliant free publication, The Edublogger, which is jammed with helpful tips, ideas and more.