Monthly Archives: June 2014

HearthSim — Direct damage spells

Let’s take a look at the effect of another Hearthstone card mechanic: direct damage spells.

We are talking about cards like Holy Smite and Fireball, the spells that deal a set amount of damage to a minion or a hero. These cards are quite popular in many decks since they provide great utility and flexibility, though some players shy away from them because they feel the spells cost too much tempo.


To model spells like this, we will have to modify the “Hand Score” term of the AI score function. Recall that the hand score is given by
$$S_{\rm c} = w_m \sum_{k} m_{k}$$
where \(m_k\) is the mana cost of card \(k\). In the case of a direct damage spell, though, it is not clear whether the player should use it at the first opportunity or hold it to get maximum value out of it. It is a matter of balancing the opportunity cost of not using the spell right away against the extra value gained by waiting for a better target. So, we modify the hand score to
$$S_{\rm c} = w_m \sum_{\rm minions} m_{k} + w_{\rm dds} \sum_{\rm dds}( a_{l} + c_{\rm dds}) $$
where the subscript “dds” stands for “direct damage spell,” \(a_{l}\) is the attack value (damage amount) of the spell, and where we introduce two new parameters, \(w_{\rm dds}\) and \(c_{\rm dds}\).

The new parameters \(w_{\rm dds}\) and \(c_{\rm dds}\), like most other model parameters, control how aggressive or conservative the AI’s spell usage is. In this case, the higher the weights are, the less aggressive the AI is when it comes to using the spell. This is because the hand score reflects how valuable the spell card is to have in your hand.
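For concreteness, here is a minimal Python sketch of this modified hand score. It is just an illustration of the formula above, not the actual HearthSim code; the card fields (mana_cost, is_direct_damage_spell, damage) are hypothetical names I made up for the example.

```python
# Minimal sketch of the modified hand score (not the actual HearthSim code).
# Cards are assumed to expose mana_cost, and direct damage spells additionally
# expose damage (the attack value a_l above); all field names are hypothetical.

def hand_score(hand, w_m=1.0, w_dds=1.0, c_dds=0.9):
    score = 0.0
    for card in hand:
        if getattr(card, "is_direct_damage_spell", False):
            # Direct damage spell contributes w_dds * damage + c_dds
            score += w_dds * card.damage + c_dds
        else:
            # Everything else (minions) contributes w_m * mana cost
            score += w_m * card.mana_cost
    return score
```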

Naturally, there is an optimal balance between aggressiveness and conservativeness. To see this, let’s do an experiment using our standard setup:

  • The simulator model described here
  • Both players have no hero ability
  • Player 0’s and Player 1’s decks consist of random sets of minions with attack between 1 and 4, health between 1 and 3, and mana cost equal to (attack + health) / 2 (a quick sketch of this deck generation follows the list).
  • Player 0 goes first, gets 3 cards.
  • Player 1 goes second, gets 4 cards.
  • Both players use the optimized AI, with \(w_{\rm a} = w_{\rm h} =0.9\) and \(\tilde{w}_{\rm a} = \tilde{w}_{\rm h} = 1\)
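For reference, here is a rough Python sketch of how such a random minion deck could be generated. It is not the HearthSim deck builder; the deck size and the handling of fractional mana costs are assumptions on my part.

```python
import random

def random_minion_deck(size=30, seed=None):
    """Random minions: attack 1-4, health 1-3, mana = (attack + health) / 2.
    The deck size and the treatment of fractional costs are assumptions,
    not necessarily what HearthSim does."""
    rng = random.Random(seed)
    deck = []
    for _ in range(size):
        attack = rng.randint(1, 4)
        health = rng.randint(1, 3)
        mana = (attack + health) / 2  # may be fractional, e.g. a 1-2 costs 1.5
        deck.append({"attack": attack, "health": health, "mana": mana})
    return deck
```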

This time, we will see what happens to Player 0 when we give it different numbers of Holy Smite cards and tune the AI’s \(w_{\rm dds}\) parameter. We set \(c_{\rm dds} = 0.9\), though it turns out that the results are not very sensitive to this number.

And… here are the results. All simulations are based on 40,000 games played.

Each cell shows Player 0’s win %, with the Clopper-Pearson 95% interval in parentheses.

| # of Holy Smites | \(w_{\rm dds} = 0.5\) | \(w_{\rm dds} = 1.0\) | \(w_{\rm dds} = 2.0\) |
|---|---|---|---|
| 0 | 42.5% (42.0%–43.0%) | 42.5% (42.0%–43.0%) | 42.5% (42.0%–43.0%) |
| 1 | 41.1% (40.6%–41.6%) | 44.6% (44.1%–45.1%) | 42.0% (41.5%–42.5%) |
| 2 | 39.9% (39.4%–40.4%) | 46.5% (46.1%–47.0%) | 40.6% (40.1%–41.0%) |
| 4 | 36.8% (36.3%–37.2%) | 48.7% (48.2%–49.2%) | 36.6% (36.1%–37.0%) |
| 6 | 32.0% (31.5%–32.4%) | 49.0% (48.5%–49.5%) | 28.5% (28.0%–28.9%) |
| 8 | 27.1% (26.6%–27.5%) | 47.2% (46.7%–47.7%) | 19.9% (19.5%–20.2%) |
| 12 | 16.7% (16.3%–17.0%) | 40.2% (39.7%–40.6%) | 6.1% (5.9%–6.4%) |
| 16 | 7.8% (7.6%–8.1%) | 28.2% (27.8%–28.6%) | 1.3% (1.2%–1.4%) |
| 20 | 2.5% (2.4%–2.7%) | 13.8% (13.4%–14.1%) | 0.19% (0.15%–0.24%) |
| 24 | 0.4% (0.3%–0.5%) | 2.9% (2.8%–3.1%) | 0.0% (0.0%–0.0%) |
| 28 | 0.0% (0.0%–0.0%) | 0.1% (0.1%–0.1%) | 0.0% (0.0%–0.0%) |

It’s probably much easier to understand the results visually. Below is a plot of Player 0’s win rate versus the number of Holy Smites it has in its deck:

The effect of varying \(w_{\rm dds}\) is striking. At \(w_{\rm dds} = 0.5\) and \(w_{\rm dds} = 2.0\), the simulations suggest that having any Holy Smite in your deck is actually detrimental to your performance. Meanwhile, using a balanced weight of 1.0 greatly increases the value of Holy Smite, and it becomes preferable to have about 6 of them in your deck. Let’s analyze this in more detail.

In the case of \(w_{\rm dds} = 0.5\), each Holy Smite is valued at 1.9 (remember, the constant \(c_{\rm dds}\) is set to 0.9). Thus, the AI is happy to use it to kill almost any enemy minion it can, including a 1-1 minion valued at 2 under the standard enemy board score weights: the AI loses 1.9 because the Holy Smite disappears after use, but gains 2 because it killed the enemy minion, for a net gain of 0.1. The AI is very trigger-happy with Holy Smite and ends up making a lot of unfavorable trades.
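In terms of the score function, that trade looks like
$$\Delta S = \underbrace{(\tilde{w}_{\rm a} \cdot 1 + \tilde{w}_{\rm h} \cdot 1)}_{\text{enemy 1-1 removed}} - \underbrace{(w_{\rm dds} \cdot 2 + c_{\rm dds})}_{\text{Holy Smite spent}} = 2 - 1.9 = 0.1 > 0,$$
so the AI sees even this trade as a (tiny) improvement.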

In the case of \(w_{\rm dds} = 2.0\), the AI swings to the other extreme. This time, the AI scores a Holy Smite in its hand at 4.9, and therefore will only trade with minions that are 3-2 or stronger. The AI also becomes hesitant to use Holy Smite in combination with a friendly minion to take out an enemy minion. This conservativeness seems to degrade the AI’s performance to the point where the AI is better off not having any Holy Smite in its deck at all. Oftentimes, the AI finds itself getting beaten down by weak minions while it hesitates to use Holy Smite because it doesn’t see “value” in it.

So, the best use of Holy Smite seems to be a flexible, middle-ground strategy. Note, though, that having too many in the deck turns ugly fast. With this setup, the optimal number seems to be around 6, but that number likely depends on the setup, a.k.a. the meta.

Look for value, but don’t go hunting for it.

As usual, I look forward to any discussions on All Things Hearthstone.

HearthSim — Tuning the AI

The HearthSim AI is controlled by various model parameters, and it is difficult if not impossible to find the optimal set of parameters that will perform well under all circumstances. So, we need to be able to break down the parameters and understand them in more manageable chunks.

For those of you unfamiliar with the AI model, see this post.

Today, let’s look at the board score weighting. In the model notation, they are \(w_{\rm a}\), \(w_{\rm h}\), \(\tilde{w}_{\rm a}\), and \(\tilde{w}_{\rm h}\).

Roughly speaking, these weights represent how much you value your minions and your opponent’s minions. If your own board score weights (the \(w\)’s without the tilde) are high compared to your opponent’s board score weights (the \(\tilde{w}\)’s), then you (the AI) are not likely to trade away your minions; the score you lose by losing your minions would be much higher than the score you gain by killing your opponent’s minions. On the other hand, if your own board score weights are low, you (again, the AI) are likely to make aggressive trades, as such moves help maximize your score. In other words, the difference between the opponent board score weights and your own board score weights determines how aggressive or control-oriented the AI is.

Here is a numerical experiment. We take two players playing standard minion-only decks. Player 0 is controlled by an AI with varying weights \(w_{\rm a}\) and \(w_{\rm h}\), while Player 1 is controlled by the standard AI with \( w_{\rm a} = w_{\rm h} = \tilde{w}_{\rm a} = \tilde{w}_{\rm h} = 1\). Under these circumstances, we record the win rate of Player 0. For the sake of brevity, we will take \(w_{\rm a} = w_{\rm h}\) and call this value the “aggressiveness.” Note that the higher the aggressiveness, the less willing the AI is to trade minions with the opponent and the more inclined it is to go for the face.

The results are summarized below:

| Aggressiveness | P0 Wins | P0 Losses | P0 Win % | Clopper-Pearson 95% Interval |
|---|---|---|---|---|
| 0.0 | 1501 | 8499 | 15.01% | 14.32%–15.73% |
| 0.1 | 2477 | 7553 | 24.47% | 23.36%–25.32% |
| 0.2 | 3114 | 6886 | 31.14% | 30.23%–32.06% |
| 0.3 | 3528 | 6472 | 35.38% | 34.34%–36.23% |
| 0.4 | 8032 | 11968 | 40.16% | 39.48%–40.84% |
| 0.6 | 18697 | 21303 | 46.74% | 46.25%–47.23% |
| 0.8 | 19288 | 20712 | 48.22% | 47.73%–48.71% |
| 0.9 | 19870 | 20130 | 49.68% | 49.18%–50.17% |
| 1.0 | 8214 | 11786 | 41.07% | 40.39%–41.76% |
| 1.2 | 14639 | 25361 | 36.60% | 36.13%–37.07% |
| 1.5 | 5868 | 14134 | 29.34% | 28.71%–29.97% |
| 2.0 | 4707 | 25296 | 15.68% | 15.27%–16.10% |

More visually, below is a plot of P0’s win rate versus aggressiveness:

The results are quite interesting. A very non-aggressive AI (low aggressiveness) tends to fare badly against a standard (aggressiveness = 1.0) opponent. This result makes intuitive sense; the AI tries too hard to clear the opponent’s board and ends up making unfavorable trades. At the opposite end of the spectrum, a super-aggressive AI also doesn’t fare too well. This AI’s problem is that it tries too hard to go for the face, ignoring the opponent’s board, and ends up giving the opponent many opportunities to make very favorable trades. So, logically, it makes sense that the best AI should sit somewhere between super-aggressive and super-controlling.

The optimal aggressiveness in this setup seems to be around 0.9. To explain this, let’s think about the change in score when a play is made by the AI. When the AI decides to make a minion trade, the AI gains score by killing an enemy minion, but also loses score by losing one of its own minions. If the two minions involved in the trade are of equal value (sum of health and attack weighted by their respective weights), the net change in score, assuming an aggressiveness of 1, is 0. Thus, in this situation, the AI will think that the trade is not advantageous and will instead go for the face, which at least increases its score. In summary, an aggressiveness = 1 AI will not make “equal value” trades. On the other hand, an AI with aggressiveness = 0.9 will usually make the equal value trade, because it scores its own minions slightly lower than enemy minions of equal value.
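Here is the same arithmetic as a small Python sketch. The dict-based minion representation and the function name are made up for illustration; this is not HearthSim code.

```python
# Score change when my_minion and enemy_minion trade and both die. The gain is
# the enemy minion's (tilde-weighted) value leaving the board; the loss is my
# minion's (w_a, w_h weighted) value leaving the board.

def trade_delta(my_minion, enemy_minion, w_a, w_h, wt_a=1.0, wt_h=1.0):
    loss = w_a * my_minion["attack"] + w_h * my_minion["health"]
    gain = wt_a * enemy_minion["attack"] + wt_h * enemy_minion["health"]
    return gain - loss

minion = {"attack": 2, "health": 2}
print(trade_delta(minion, minion, w_a=1.0, w_h=1.0))  # 0.0: equal trade looks neutral
print(trade_delta(minion, minion, w_a=0.9, w_h=0.9))  # ~0.4: same trade now looks good
```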

The fact that an aggressiveness = 0.9 AI fares better than an aggressiveness = 1.0 AI tells us that, when an equal value trade is available, it is usually better to make the trade than to go for the face. In fact, the results suggest that it is usually correct to go for a 2-for-1 trade as well, since an aggressiveness = 0.5 AI still performs better than an aggressiveness = 1.0 AI. Somewhat counterintuitive, but not wholly unbelievable.

HearthSim — The Power of “The Coin”

Let’s take a look at the most basic card of all, The Coin.



The Coin is a 0 mana spell that gives you one mana crystal upon use. The player who goes second gets The Coin in his starting hand.
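Mechanically, The Coin is about as simple as a card gets. Here is a hypothetical Python sketch of the effect; the class and attribute names are made up for illustration and are not the HearthSim API.

```python
# Hypothetical sketch of a Coin-like effect; names are illustrative only.

class TheCoin:
    mana_cost = 0

    def play(self, player):
        # Grant one extra mana crystal for the current turn.
        player.current_mana += 1
```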

How valuable is The Coin? To get a hint, we run HearthSim with the following setup:

  • The simulator model described here
  • Both players have no hero ability
  • Player 0’s and Player 1’s decks consist of random sets of minions with attack between 1 and 4, health between 1 and 3, and mana cost equal to (attack + health) / 2.
  • Player 0 goes first, gets 3 cards.
  • Player 1 goes second, gets 4 cards.
  • Both players use the standard AI, with \(w_{\rm a} = w_{\rm h} = \tilde{w}_{\rm a} = \tilde{w}_{\rm h} = 1\)

We compare two cases: Case 1, where Player 1 gets The Coin, and Case 2, where Player 1 does not.

The results are summarized below:

|  | With The Coin | Without The Coin |
|---|---|---|
| P0 wins | 8214 | 24569 |
| P0 losses | 11786 | 15431 |
| P0 win % | 41.07% | 61.42% |
| Clopper-Pearson 95% Interval | 40.39%–41.75% | 60.94%–61.90% |

The numbers are striking. The Coin basically turns what would otherwise be a first-player advantage into a first-player disadvantage. Not bad for a single card!

As usual, I welcome any discussions on All Things Hearthstone!

HearthSim — Intro

HearthSim is a general Hearthstone simulator that I am developing. It is designed to allow one to perform simplified theoretical analysis on Hearthstone decks.

The Setup

The simulator pits two AI controlled players against each other, playing a large number of simulated games. The game setup is similar to that of Hearthstone, but mulliganing is not supported at this time due to the difficulty in modeling it.

The AI Model

Currently, the AI is a simple, rudimentary score-maximizing model. On each turn, the AI simulates all the possible moves it can make and assigns each a score based on how good the outcome is. Thus, the AI’s performance is determined mostly by the function used to assign these scores.

Scoring Function

Note: Henceforth, variables with a tilde denote values associated with the opponent (enemy).

The scoring function can be written as
$$S = S_{\rm b} + \tilde{S}_{\rm b} + S_{\rm c} + S_{\rm h} + \tilde{S}_{\rm h} $$
where \(S_{\rm b}\) is the friendly board score, \(\tilde{S}_{\rm b}\) is the enemy board score, \(S_{\rm c}\) is the hand score, \(S_{\rm h}\) is your own hero’s health score, and \(\tilde{S}_{\rm h}\) is the enemy hero’s health score. Let’s go through each one in a little more detail.

\(S_{\rm b}\) — Friendly board score

The friendly board score is the sum of the attack and health values of all the friendly minions you have on the battlefield. In other words,
$$S_{\rm b} = \sum_{i} (w_{\rm a} a_{i} + w_{\rm h} h_{i})$$
where \(a_{i}\) is the attack value and \(h_{i}\) is the health value of friendly minion \(i\). The attack and health values are weighted by \(w_{\rm a}\) and \(w_{\rm h}\), respectively. The rationale for this score function is straightforward: the more attack and health your minions have, the better it is for you. The weights are tunable parameters so that the AI can have different focuses: one AI might value health more and play a very control-oriented game, while another might value attack more and play an aggressive game. Both weights are currently set to 1.

\(\tilde{S}_{\rm b}\) — Enemy board score

The enemy board score is essentially the same as the friendly board score, except it is negated. That is,
$$\tilde{S}_{\rm b} = -\sum_{j} (\tilde{w}_{\rm a} \tilde{a}_{j} + \tilde{w}_{\rm h} \tilde{h}_{j})$$
for each enemy minion \(j\). In other words, the higher the attack and health values of your opponent’s minions, the worse it is for you.

\(S_{\rm c}\) — Hand (card) score

In general, the more cards you have in your hand, the better it is for you. So, the hand score is pretty straightforward:
$$S_{\rm c} = w_m \sum_{k} m_{k}$$
where \(m_k\) is the mana cost of card \(k\). It’s debatable whether higher mana cost cards are actually more “valuable” or not, but for now this seems to work pretty well.

\(\tilde{S}_{\rm h}\) — Enemy health score

Most often, the less health the enemy hero has, the better it is for you. The enemy health score captures this observation:
$$\tilde{S}_{\rm h} = \left\{ \begin{array}{l l} -\tilde{w}_{\rm hh} \tilde{h} & \quad \text{if $\tilde{h} > 0$} \\ \infty & \quad \text{if $\tilde{h} \le 0$} \end{array} \right. $$
where \(\tilde{h}\) is the enemy hero’s health value and the weight \(\tilde{w}_{\rm hh}\) is taken to be \(0.1\). The infinity is there to make sure that if there is a lethal, then it is always picked. In practice, it’s not infinity but rather a large positive number like 1e12.

\(S_{\rm h}\) — Your hero health score

Most often, the more health your hero has, the better. So,
$$S_{\rm h} = \left\{ \begin{array}{l l} w_{\rm hh} h_{\rm s} & \quad \text{if $h_{\rm s} > 0$} \\ -\infty & \quad \text{if $h_{\rm s} \le 0$} \end{array} \right. $$
where \(h_{\rm s}\) is your own hero’s health value and the weight \(w_{\rm hh}\) is taken to be \(0.1\). The negative infinity is there to make sure that the AI never does anything that would kill itself.
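Putting all the pieces together, here is a minimal Python sketch of the full scoring function. It is not the actual HearthSim implementation; the state layout (plain dicts and the key names) is assumed purely for illustration.

```python
# Minimal sketch of S = S_b + S~_b + S_c + S_h + S~_h as described above.
# The state layout (dict keys) is an assumption, not the HearthSim data model.

LETHAL = 1e12  # stands in for "infinity" when a hero is dead

def score(state, w_a=1.0, w_h=1.0, wt_a=1.0, wt_h=1.0,
          w_m=1.0, w_hh=0.1, wt_hh=0.1):
    # Friendly board score: weighted attack and health of my minions
    s_b = sum(w_a * m["attack"] + w_h * m["health"] for m in state["my_minions"])
    # Enemy board score: the negative of the same sum over enemy minions
    s_b_enemy = -sum(wt_a * m["attack"] + wt_h * m["health"]
                     for m in state["enemy_minions"])
    # Hand score: weighted sum of mana costs of cards still in hand
    s_c = w_m * sum(card["mana"] for card in state["my_hand"])
    # Enemy health score: -wt_hh * health, or +LETHAL if the enemy is dead
    h_enemy = state["enemy_hero_health"]
    s_h_enemy = LETHAL if h_enemy <= 0 else -wt_hh * h_enemy
    # Own health score: +w_hh * health, or -LETHAL if we are dead
    h_self = state["my_hero_health"]
    s_h_self = -LETHAL if h_self <= 0 else w_hh * h_self
    return s_b + s_b_enemy + s_c + s_h_enemy + s_h_self
```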

Move Generation

The simulation AI engine tries to brute-force all the possible moves one can make in the current turn, given the current hand and board. The number of possible move sequences scales exponentially with the number of minions on the board and cards in hand, so the AI is limited to 30 seconds of computation, after which it gives up and plays the best sequence of moves it has found thus far. On my MacBook Pro, this limits the search to about 1.5 million candidate moves per turn. That’s fair enough for now… after all, there is a time limit per turn in the real game too.
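For concreteness, here is a rough Python sketch of this kind of time-limited, brute-force search over move sequences. The helpers legal_moves, apply_move, and score are assumed to exist; this illustrates the idea rather than the actual HearthSim engine.

```python
import time

# Depth-first enumeration of move sequences for the current turn, keeping the
# best-scoring state found before the time budget runs out. legal_moves,
# apply_move (which returns a new state), and score are assumed helpers.

def best_move_sequence(state, legal_moves, apply_move, score, budget_seconds=30):
    deadline = time.time() + budget_seconds
    best_score, best_seq = score(state), []

    def search(s, seq):
        nonlocal best_score, best_seq
        if time.time() > deadline:
            return  # out of time; keep the best sequence found so far
        current = score(s)
        if current > best_score:
            best_score, best_seq = current, list(seq)
        for move in legal_moves(s):
            seq.append(move)
            search(apply_move(s, move), seq)
            seq.pop()

    search(state, [])
    return best_seq
```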

One thing the AI does not consider at this time is possible moves beyond the current turn. I’m working on modeling this, but the sheer number of possibilities prohibits a brute-force approach. For now, it’s OK because I’m only looking at simple, idealized cases, but it’s definitely something I’m working on.

The Code

I plan to open up the code base in the near future.

Discussion

Head over to All Things Hearthstone on Versify to discuss or ask me any questions. I’ll be happy to answer anything.