Trading the news: A sentiment analysis strategy

Published by Joanne Snel on November 3, 2021

News-following investment strategies often require you to closely follow the news. What if you could automate the process? In this project, we’ll build a sentiment analysis strategy that autonomously trades based on news headlines. We show you how to scrape headlines from a financial website, determine the sentiment of those headlines and make trade decisions based on your findings. With the Python requests library, Beautiful Soup and VADER in your toolbox, you’ll have everything you need to bring this project to fruition. We’re super excited to show you just how perfect a brokerage API powering automated trading is for a project like this.

If you want to get started developing straight away, you can check out our GitHub repository for this project here. Otherwise, keep reading to learn more about the strategy. 

Why would I trade the news? 📰

You’ve probably heard the maxim ‘buy the rumour, sell the news’, referring to the phenomenon that traders speculate on upcoming news (and thus on price movements) and subsequently exit their positions once the news has been published. If that really is the case, wouldn’t trading the news be too late? It turns out that day-to-day fluctuations in price to some extent reflect emotional reactions to the news (see this article). Combine this with an automated script, and you’re capitalising on the reactions of others to the news, without having to closely follow it yourself.

Are you convinced? Let’s see how we can make the news work for us. We’ll begin by collecting news headlines.

Collecting your data 📊

As this process will be different for each data source, we won’t go into too much detail here. If you need some more help with web scraping, check out this article.

For this project, our goal is to place trades automatically based on the news. The first step is to decide how we want to gather our data, and especially from which source. We went for MarketWatch because the data is presented in an easily digestible format — for each headline, we are given its date and the ticker(s) corresponding to the headline, see the example below. Using the headlines, we can play a game of sentiment red light, green light and hopefully cash in a few won (🦑 🎲). But more on that in a second.

Screenshot of technology headlines collected on 19 October 2021

To collect these headlines, we use a simple GET request against the desired URL. Using the requests package, this looks as follows:

import requests

page = requests.get("")  # URL of the headlines page you want to scrape

And to parse this data, we use BeautifulSoup, which is a Python package that can extract data from HTML documents.

import pandas as pd
from bs4 import BeautifulSoup

soup = BeautifulSoup(page.content, "html.parser")
article_contents = soup.find_all("div", class_="article__content")

headlines = []
for article in article_contents:
    headline = article.find("a", class_="link").text.strip()
    ticker = article.find("span", class_="ticker__symbol")
    # not every headline mentions a ticker, so guard against None
    headlines.append([headline, ticker.text if ticker else None])

headlines_df = pd.DataFrame(headlines, columns=["headline", "US_ticker"])

Keep in mind, this code won’t work for just any website. You’ll notice that we are accessing the article contents in something called “div” and “article__content”. You’ll need to adjust this on a website-by-website basis, and this requires some inspection of the page you are on. In Chrome, you can do this by right-clicking anywhere on a website and selecting ‘Inspect’ (if you use another browser, use these steps instead). You’ll be met with a jumble of HTML. The easiest way to figure out where the headlines are ‘hiding’ is to Ctrl-F (or Command-F on macOS) a particular headline. You can also click the ‘Select an element in the page to inspect it’ button in the top-left of the Developer console to pinpoint where to find your desired data.

Clicking on a particular element will reveal where it sits in the HTML code. For example, when we click on a ticker, we’re informed it can be found in the <span> tag and in the ‘ticker__symbol’ class.

Once you’ve found the tags corresponding to the right element(s), you can paste the names into the code snippet above to retrieve their contents. We suggest frequently printing your output to determine whether you are collecting the desired information and whether it needs to be pre-processed. For example, we call .strip() on each headline to remove the surrounding whitespace and clean up our data. When you’re happy with your output, you can collect all relevant information in a Pandas DataFrame. If you’re interested in other Python resources that might be useful for automated trading, check out our article here.

Pre-processing your data 👩‍🏭

At this stage, it’s likely your data needs some (additional) pre-processing before it’s ready for sentiment analysis and trading. Perhaps you’re also collecting the headline’s timestamp and need to convert it to a different time-format, or you might want to remove any headlines that don’t mention a ticker.

Luckily, in our case, we don’t have to do a lot of pre-processing. In our GitHub repository, you’ll notice that we removed any headlines without tickers, as well as headlines with tickers that we know are not tradable (to keep the dataset smaller). To do this, we created a list of non-tradable tickers and constructed a new DataFrame of the collected headlines, filtered by the negation of that list. Additionally, to place a trade, we need to obtain the instrument’s ISIN. Because we trade on a German exchange, querying for a US ticker will not (always) result in the correct instrument. Therefore, to ensure that there are no compatibility issues, we suggest mapping a ticker to its ISIN before trading. We’ve published an article that’ll help you do just that.
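A minimal sketch of that filtering step, assuming the DataFrame layout from the scraping snippet above. The headlines, tickers and the not_tradable list below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical scraped headlines; some rows have no ticker at all
headlines_df = pd.DataFrame({
    "headline": ["Big Tech earnings beat", "Macro news, no ticker", "Small-cap story"],
    "US_ticker": ["AAPL", None, "XYZ"],
})

not_tradable = ["XYZ"]  # tickers we know we cannot trade

# Keep rows that have a ticker AND whose ticker is not in the exclusion list
headlines_df = headlines_df[
    headlines_df["US_ticker"].notna()
    & ~headlines_df["US_ticker"].isin(not_tradable)
].reset_index(drop=True)
```

The same pattern (boolean mask plus `~df[col].isin(...)`) works for any exclusion list you maintain.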

Performing the sentiment analysis 😃😢

Once we’ve collected our headlines and tickers (or ISINs), we need to be able to decide whether the headlines report positive or negative news. This is where our sentiment analysis tool, VADER, comes in. It’s a model for lexical scoring based on polarity (positive/negative) and intensity of emotion. The compound score indicates whether a text is positive (>0), neutral (0), or negative (<0). In the headlines above, it can determine that ‘“Squid Game” is worth nearly $900 million to Netflix’ has a somewhat positive sentiment, as the word ‘worth’ is likely part of the positive sentiment lexicon. 🚦 If you’d like to read more about how VADER works, check out this article. There are also alternatives out there, like TextBlob or Flair. You might want to try out all three to determine which one works best on your dataset.

For our use-case (determining sentiment scores of online newspaper headlines), the implementation is really simple:

from nltk.sentiment.vader import SentimentIntensityAnalyzer
# if this is your first time using VADER, run nltk.download("vader_lexicon") once

vader = SentimentIntensityAnalyzer()
scores = []
for headline in headlines_df.loc[:, "headline"]:
    score = vader.polarity_scores(headline).get("compound")
    scores.append(score)
headlines_df.loc[:, "score"] = scores

If we have more than one headline (and scores) for a particular ticker, we have to aggregate them into a single score:

headlines_df = headlines_df.groupby("ticker", as_index=False)["score"].mean()

We’ve chosen to combine scores by taking the simple average, but there are several measures that you might opt to use instead. For example, a time-weighted average penalises older headlines, as they are probably less representative of current (or future) market movements.
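A time-weighted average could look like the sketch below. The tickers, timestamps and the 12-hour decay constant are all hypothetical; the idea is simply that each score is weighted by how recent its headline is:

```python
import numpy as np
import pandas as pd

# Hypothetical scored headlines with publication timestamps
df = pd.DataFrame({
    "ticker": ["NFLX", "NFLX", "AAPL"],
    "score": [0.6, 0.2, -0.4],
    "published_at": pd.to_datetime(
        ["2021-10-19 09:00", "2021-10-19 15:00", "2021-10-19 12:00"]
    ),
})

# Exponentially decaying weight: the older the headline, the smaller the weight
age_hours = (df["published_at"].max() - df["published_at"]).dt.total_seconds() / 3600
df["weight"] = np.exp(-age_hours / 12)  # 12 h decay scale, chosen arbitrarily

# Weighted average per ticker: sum(score * weight) / sum(weight)
df["weighted_score"] = df["score"] * df["weight"]
agg = df.groupby("ticker")[["weighted_score", "weight"]].sum()
time_weighted = agg["weighted_score"] / agg["weight"]
```

Here the newer NFLX headline (score 0.2) pulls the combined score below the simple average of 0.4, which is exactly the recency effect we want.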

Placing your trades 📈

Once you’ve obtained the compound scores for the tickers, it’s time to place trades. However, you first need to decide on a trading strategy: what kind of score justifies a buy order? What about a sell order? And how much are you trading? There are several components to keep in mind here, such as your total balance, your current portfolio and the ‘trust’ you have in your strategy. However, these parameters will vary depending on your goals for this strategy.

Our base project works with a very simple trade rule: buy any instrument with a score above 0.5 and sell any instrument with a score below -0.5 (see if you can come up with something a bit more complex 😉):

buy = []
sell = []
for index, row in headlines_df.iterrows():
    if row["isin"] == "No ISIN found":
        continue  # skip instruments we couldn't map to an ISIN
    if row["score"] > 0.5:
        buy.append(row["isin"])
    elif row["score"] < -0.5:
        sell.append(row["isin"])

If the instrument is tradable (an ISIN was found) and the sentiment score clears our buy/sell threshold, we add the ISIN to the list of instruments we wish to trade.

We can then feed this list of ISINs to the API (if you’re not signed up yet, do that here) to place and activate our trades:

import requests

API_KEY = "YOUR-API-KEY"   # your personal API key
ORDERS_URL = ""            # the Trading API's orders endpoint

orders = []
# place buy orders
for isin in buy:
    order = requests.post(
        ORDERS_URL,
        data={"isin": isin,
              "expires_at": "p0d",
              "side": "buy",
              "quantity": 1,
              "venue": "XMUN",
              "space_id": "YOUR-SPACE-ID"},
        headers={"Authorization": f"Bearer {API_KEY}"}).json()
    orders.append(order)
# place sell orders
for isin in sell:
    order = requests.post(
        ORDERS_URL,
        data={"isin": isin,
              "expires_at": "p0d",
              "side": "sell",
              "quantity": 1,
              "venue": "XMUN",
              "space_id": "YOUR-SPACE-ID"},
        headers={"Authorization": f"Bearer {API_KEY}"}).json()
    orders.append(order)
# activate orders
for order in orders:
    order_id = order["results"].get("id")
    requests.post(
        f"{ORDERS_URL}{order_id}/activate/",
        headers={"Authorization": f"Bearer {API_KEY}"})
    print(f'Activated {order["results"].get("isin")}')

You’ll need to fill in your own space ID and API key in this code snippet to make it run. Please also make sure you’re not selling any financial instruments you don’t own; we’ve omitted that check in this snippet, but you can find the implementation in our GitHub repository!
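Such an ownership check can be sketched as follows. Everything here is hypothetical: the `positions` mapping stands in for whatever your broker's positions endpoint returns, and the ISINs are only examples:

```python
# Hypothetical snapshot of current holdings: ISIN -> quantity held.
# In practice you would assemble this from your broker's positions endpoint.
positions = {"US0378331005": 2, "US88160R1014": 1}

# Candidate sell orders produced by the sentiment rule (example ISINs)
sell = ["US0378331005", "US64110L1061"]

# Keep only the instruments we actually hold, so we never try to sell short
sell = [isin for isin in sell if positions.get(isin, 0) > 0]
```

After the filter, only the first ISIN survives, because it is the only candidate we actually hold.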

For demonstration purposes, our trades are all of size 1, but depending on your capital, you might want to increase this parameter (or even make it dynamic depending on the sentiment score). Besides this, there are lots of other ways you can make this project even more extensive! We are excited to see your ideas 😏
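One way to make the quantity dynamic is to scale it with the strength of the sentiment signal. The rule below is just one hypothetical sizing scheme; the 0.5 cut-off matches the trade rule above, while `base_qty` and `max_qty` are made-up parameters:

```python
def quantity_for(score: float, base_qty: int = 1, max_qty: int = 5) -> int:
    """Scale order size with sentiment strength (hypothetical rule).

    A score right at the 0.5 threshold trades base_qty; the strongest
    possible score (|score| == 1.0) trades max_qty.
    """
    strength = (abs(score) - 0.5) / 0.5  # 0.0 at the threshold, 1.0 at the extreme
    return round(base_qty + strength * (max_qty - base_qty))
```

For example, `quantity_for(0.5)` returns 1 and `quantity_for(1.0)` returns 5, so a barely-positive headline trades the minimum size while an extremely positive one trades the maximum.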

Further extensions 🤓

This project is only a start to your very own sentiment trading strategy. There are several extensions that can be made, for example, you can make your trade decisions more robust by collecting news from several sources. Or you can conduct more extensive sentiment analysis by, for example, applying VADER on the whole article rather than just the headline (we all know clickbait is a real thing 🎣). Perhaps you want to use a different sentiment analysis tool, like TextBlob. Or maybe you even want to create your own sentiment score library based on investment-specific jargon.

We suggest you begin by collecting data from a news source you trust and tweaking the trading decision rule. Let your imagination go wild!

You’re now set to use BeautifulSoup, VADER and a brokerage API in your sentiment analysis project. See our GitHub repository for the entire script. And, if you come up with an interesting extension, feel free to make a PR! We look forward to seeing your ideas.

Joanne 🍋

You might also be interested in

Understanding the Trading API

Our Trading API was born out of the lack of European brokerages with a stable, reliable API. We designed it with the developer in mind, which means that its infrastructure is optimised such that you can use it to build almost any brokerage product you can imagine. Keep reading to learn more about its endpoints.

Trading terminology you need to know as a beginner


The trading world is one known for its jargon - what’s the difference between a ‘bear’ and a ‘bull’ market, anyways? And why are financial instruments listed with two prices? If your trading journey is just getting started or you need a quick recap, keep reading to brush up on beginner trading terms. You’ll be an expert in no time!

10+ online resources for high-quality stock market news


Staying informed of market developments is crucial when it comes to making informed investment decisions. Whether you like listening to podcasts, reading news articles or interacting with like-minded individuals in an investing community, we’ve got a recommendation for you. Keep reading to find your match.
