Building a Simple Game with an Embedded ESP32 Board
Learn how to build a simple game with an embedded ESP32 board in this article by Agus Kurniawan, an independent technology consultant, author, and lecturer with more than 18 years of experience in software development projects, delivering training and workshops, and producing technical writing.
This article will look at developing a game with an embedded ESP32 board and some embedded modules. Here, you will learn how to work with a joystick, buttons, sound, and an LCD. Before we begin, make sure you have the following things ready:
· A computer with an OS installed, such as Windows, Linux, or macOS
· An ESP32 development board – it is recommended to use the ESP-WROVER-KIT v4 board from Espressif
Introducing game-embedded systems
You will be familiar with the Game Boy, an 8-bit handheld game console developed by Nintendo. This console includes a joys...
As your family sits down after dinner and a long day of work, one of the children starts up a conversation with her new connected play doll, while the other begins to watch a movie on the new smart television. The smart thermostat is keeping the living area at a steady 22°C, while diverting energy from the rooms that aren't being used at the moment.
Father is making use of the home computer's voice control features, while mother is installing new smart light bulbs that can change color on command or based on variations in the home environment. In the background, the smart refrigerator is transmitting an order for the next-day delivery of groceries.
This setting tells a great story about the consumer Internet of Things (IoT) in that there are exciting new capabilities and conveniences. It also begins to make clear the soon-to-be hyper-connected nature of our homes and environments. If we start to examine these new smart products, we can begin to see the concern surrounding privacy within the IoT.
The privacy challenges with the IoT are enormous, gi...
Amazon Web Services offers a few different container solutions. The one we are going to look at in this section is part of the Amazon Elastic Container Service (ECS) and is called AWS Fargate.
Traditionally, Amazon ECS launches EC2 instances. Once launched, an Amazon ECS agent is deployed alongside a container runtime that allows you to then manage your containers using the AWS Console and command-line tools. AWS Fargate removes the need to launch EC2 instances, allowing you to simply launch containers without having to worry about managing a cluster or having the expense of EC2 instances.
We are going to cheat slightly and work through the Amazon ECS first run process. You can access this by going to the following URL: https://console.aws.amazon.com/ecs/home#/firstRun. This will take us through the four steps we need to take to launch a container within a Fargate cluster.
Once you’ve created a distributed application based on the microservice architecture and architected it as a set of services, you can deploy each service as a set of service instances to improve throughput and availability. The microservice architecture makes each service independently deployable and scalable, and all service instances are isolated from each other.
The microservice architecture allows us to build and deploy a service quickly. It also allows us to limit the resources a service uses, including CPU, memory, and I/O. A microservice application can have tens or hundreds of services. You can independently increase or decrease the resources of a deployment machine based on the usage of a service.
Microservices also allow you to write a service in any language and framework, so you can provide the infrastructure for a service accordingly. You can monitor each service independently and deploy a service according to its behavior.
For example, imagine that you need to run a service with a certain number of instances based on the demand for the service in a business application. With a microservice application, you can easily achieve this by adding multiple VMs or containers for that particular service.
You can also provide the appropriate CPU, memory, and I/O resources for each instance. The challenging aspect of a microservice application is that the service deployment must be fast, reliable, and cost-effective.
There are a few strategies that we can use to deploy the microservices of a distributed application; they are as follows:
Multiple instances of microservices per host
According to this strategy, multiple instances of a microservice run on one or more physical or virtual hosts. In this approach, each instance of the service runs on a different, well-known port on one or more virtual or physical machines. This is a very traditional approach to microservice application deployment and is illustrated in the following diagram:
The preceding diagram shows the structure of this pattern. There are two physical or virtual hosts (A and B). These hosts have multiple instances of microservices for our application, and they are Account Service, Order Service, and Book Service.
You can achieve this pattern of microservice deployment using the following methods:
Deploying multiple instances of microservices on the same Apache Tomcat server or in the same JVM
Deploying each instance of a microservice as its own JVM process – for example, one Tomcat instance per service instance
This pattern has the following benefits:
It has more efficient resource utilization than other approaches
Deploying a service instance is relatively fast
However, this approach also has the following disadvantages:
There is no isolation between the instances of microservices; therefore, a defective service instance could produce noise or affect other services in the same process
It could create conflict over resource utilization between instances of microservices
It could also cause problems due to a conflict between versions
We can't assign a specific amount of resources to a specific microservice instance, nor can we increase its resource capacity
It is also difficult to monitor resource utilization independently for one instance of a microservice
As mentioned earlier, this is the traditional approach to deploying microservices, so it has more limitations than the others. Let's now move on to some other approaches.
A single instance of a microservice per host
According to this approach, we deploy a single instance of a microservice on its own single host. A service instance is deployed to its own host and each service instance runs independently. This approach has two specific patterns:
A single instance of a microservice per VM
A single instance of a microservice per container
A host can be a physical machine, a virtual machine, or a container such as a Docker container. The following diagram demonstrates this approach of deploying microservices:
As you can see in the preceding diagram, there are a number of hosts, each of which holds a single instance of a service. Each service instance has been deployed on its own host machine, which is either a VM or a container. Let's now discuss the benefits and drawbacks of this approach.
This approach has the following benefits:
It provides complete isolation between instances of microservices
We can easily correct a defective service without affecting other services
There is no resource utilization conflict between instances of microservices because each service runs on a separate host using its own resources; in other words, there are no resources shared between instances of microservices
We can assign a specific amount of resources to a microservice instance on demand
We can easily monitor, manage, and redeploy each service instance
However, this approach has the following drawback:
It has less efficient resource utilization compared with the multiple instances of microservices per host approach
Let's have a look at the two different types of this pattern.
A single instance of a microservice per VM
According to this approach, you can package the service as a VM image and use this to deploy it. The service instance is deployed as a separate VM. For example, we can use an AWS EC2 instance as a VM, as illustrated in the following diagram:
As you can see in the preceding diagram, this pattern packages an instance of the service as a VM image, such as an Amazon EC2 AMI, and launches each VM image as a running instance.
Many companies use this approach to deploy microservices, such as Netflix, who use this pattern to deploy their video streaming service. Netflix packages an instance of the video streaming service as an EC2 AMI using Aminator, with each instance running as an EC2 instance. Other companies who use this pattern include Boxfuse and Cloud Native.
There are various tools available on the market to package instances of your services as VM images. For example, Jenkins invokes Aminator to build an instance of your service as an EC2 AMI. Similarly, Packer creates VM images through multiple virtualization technologies such as EC2, DigitalOcean, VirtualBox, and VMware.
Let's now move on and have a look at the benefits and drawbacks of this approach.
This approach has the following benefits:
It is easy to scale by increasing the number of instances; if you use this pattern, you can use the power of the mature cloud infrastructure. For example, AWS provides auto-scaling groups to scale the service automatically based on the traffic or load to the service. AWS also provides another useful feature, which is the Elastic Load Balancer.
It is very isolated, which means that each service instance runs independently without being hampered by other services.
Each instance has a fixed amount of resources, such as CPU or memory, and no other service can share its resources.
Deployment is much simpler and more reliable.
A VM encapsulates your services, along with the required technologies inside a virtual box, similar to a black box.
However, this pattern does have the following disadvantages:
Resource utilization is less efficient
Building a VM image is time-consuming
It requires you to build and manage VMs – although there are some tools such as Boxfuse that provide a solution for this
Let's have a look at another, more lightweight approach to deploying microservices: the single instance of microservices per container pattern.
A single instance of a microservice per container
According to this approach, each instance of a microservice runs in its own lightweight container. The container is nothing but a virtualization mechanism at the operating system level. This means that you can package your service as a container image, such as a Docker image, and you can deploy that image as a container, as illustrated in the following diagram:
As you can see in the preceding diagram, each container is virtualized over the operating system of the VM.
Docker is one of the most popular container-based technologies. Docker provides a way of packaging and deploying services. Each service is packaged as a Docker image, which is then deployed as a Docker container. You can use Docker containers with the following Docker clustering frameworks to manage your containers:
Amazon EC2 Container Service
Docker containers have their own port namespace and root filesystem, and you can also set a resource utilization limit for each container.
Let's have a look at the benefits and drawbacks of this method.
The benefits of the container approach are similar to those of the VM approach. It also has the following additional advantages:
Unlike VMs, containers are a lightweight technology
Building a container image is much faster than building a VM image; this is because the container doesn't have any lengthy OS boot mechanisms and it starts only the application process, rather than an entire OS
Each service instance is isolated, just like the VM approach
This pattern has the following drawbacks:
Currently, the container infrastructure is not as mature as the infrastructure for VMs
The container infrastructure is not as secure as the infrastructure for VMs
Containers don't provide as rich an infrastructure as VMs
It has less efficient resource utilization compared to the multiple services per host pattern because there are more hosts
We've now looked at different approaches to deploying microservices. You can choose either VMs or containers for deploying microservices, according to your requirements.
In this article, we are going to talk about how to collect URLs from the website we would like to scrape.
We will use some simple regex and XPath rules for this, and then we will jump into writing scripts to collect data from the website. We will also play with data, draw some plots, and create some charts.
We will collect a dataset from a blog, which is about big data (www.devveri.com). This website provides useful information about big data and data science domains. It is totally free of charge. People can visit this website and find use cases, exercises, and discussions regarding big data technologies.
Let's start collecting information to find out how many articles there are in each category. You can find this information on the main page of the blog, using the following URL: http://devveri.com/ .
As you can see on the left-hand side, there are articles that were published recently. On the right-hand side, you can see the categories and the article count for each category:
To collect the information about how many articles we have for each category, we will use the landing page URL of the website. We will be interested in the right-hand side of the web page shown in the following image:
The following code loads the rvest library and stores the URL in a variable:
library(rvest)
urls <- "http://devveri.com/"
If we print the urls variable, it will look like the following image in RStudio:
Now let's talk about the comment counts of the articles. Because this web page shares useful information about recent developments in the big data and data science domains, readers can easily ask the author questions or discuss an article with other readers simply by commenting.
Also, it's easy to see comment counts for each article on the category page. You can see one of the articles that was already commented on by readers in the following screenshot. As you can see, this article was commented on three times:
In the following section, we will also write XPath rules to collect this information; then we will write an R script and, finally, we will play with the data to create some charts and plots.
Writing XPath rules
In this part, we are going to create our XPath rules to parse the HTML document we will collect:
First of all, we will write XPath rules to collect information from the right-hand side of the web page, where the categories are shown.
Let's navigate to the landing page of the website devveri.com and use Google Developer Tools to create and test XPath rules.
To use Google Developer Tools, we can right-click on the element that we are interested in.
Click Inspect Element. In the following screenshot, we marked the elements regarding categories:
Let's write XPath rules to get the categories. We are looking for the information about how many articles there are for each category and the name of the categories:
If you type the XPath rule into the Developer Tools console, you will get the following elements. As you can see, we have 18 text elements, because there are 18 categories shown on the right-hand side of the page:
Let's open a text element and see how it looks. In the next part, we will see how to extract this information with R. As you can see from the wholeText section, we only have the category names:
Still, we will need to collect article counts for each category:
Use the following XPath rule; it will help to collect this information from the web page:
If you type the XPath rule to the console on the Developer Tools, you will get the following elements:
As you can see, we have 18 text elements, because there are 18 categories shown on the right-hand side of the page.
Now it's time to start collecting the article URLs, since at this stage we are going to collect the comment counts of articles that were written recently. For this, it would be good to have the name and date of each article. If we search for the name of the first article, we will get the element containing the name of the article, as shown in the following screenshot:
Let's write XPath rules to get the name of the article:
If you type the XPath rule to the Developer Tools console, you will get the following elements. As you can see, we have 15 text elements, because there are 15 article previews on this page:
Let's open the first text element and see how it looks. As you can see, we managed to get the text content that we are interested in. In the next part, we will see how to extract this information with R:
We have the names of the articles; as we decided, we should also collect the dates and comment counts of the articles. The following XPath rule will help us to collect the created date of each article in text format:
If you type the XPath rule on the Developer Tools console, you will get the elements, as shown in the following screenshot. As you can see, we have 15 text elements regarding dates, because there are 15 article previews on this page:
Let's open the first text element and see how it looks. As you can see, we managed to get the text content that we are interested in:
We have the names of the articles and the created dates of the articles. As we decided, we should still collect the comment counts of the articles. The following XPath rule will help us to collect comment counts:
If you type the XPath rule to the Developer Tools console, you will get the elements, as shown in the following screenshot. As you can see, we have 15 text elements regarding comment counts, because there are 15 article previews on this page:
Let's open the first text element and see how it looks. We managed to get the text content that we are interested in:
Writing your first scraping script
Let's start writing our first scraping script using R. In the previous sections, we already created the XPath rules and the URLs that we are interested in. We will start by collecting the categories and information about how many articles there are for each category:
First of all, we need to load the rvest library using the library function:
library(rvest)
Now we need to create NULL variables, because we are going to save the article count for each category and the name of each category.
For this purpose, we are creating the category and count variables:
#creating NULL variables
category <- NULL
count <- NULL
Now it's time to create a variable that holds the URL we would like to navigate to and collect data from. The following code block assigns the URL to the urls variable:
#links for page
urls <- "http://devveri.com/"
Now for the most exciting part: Collecting data!
The following script first visits the URL of the web page and collects the HTML nodes using the read_html function. To parse the HTML nodes, we use the html_nodes function with the XPath rules that we created in the previous section:
We can use the data.frame function to see categories and counts together.
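The script itself appears as a screenshot in the original article. As a runnable sketch of the same steps, the fragment below parses an inline HTML snippet standing in for the category widget on the live page; the snippet's markup and XPath rules are hypothetical stand-ins for the real ones built in the previous section:

```r
library(rvest)

# In the real script, read_html() is called on the urls variable
# ("http://devveri.com/"); here an inline fragment stands in for the page.
html <- "<ul id='cats'><li><a>Hadoop</a> (24)</li><li><a>Nosql</a> (12)</li></ul>"
page <- read_html(html)

# html_nodes() applies an XPath rule to the parsed document;
# html_text() extracts the text content of the matched nodes.
category <- html_text(html_nodes(page, xpath = "//ul[@id='cats']/li/a"))
count    <- html_text(html_nodes(page, xpath = "//ul[@id='cats']/li/text()"))

data.frame(category, count)
```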
When you run the script, you will get the following result in R:
1 Big Data (11)\n
2 Cloud (3)\n
3 docker (1)\n
4 Doğal Dil İşleme (2)\n
5 ElasticSearch (4)\n
6 Graph (1)\n
7 Haberler (7)\n
8 Hadoop (24)\n
9 HBase (1)\n
10 Kitap (1)\n
11 Lucene / Solr (3)\n
12 Nosql (12)\n
13 Ölçeklenebilirlik (2)\n
14 Polyglot (1)\n
15 Sunum (1)\n
16 Veri Bilimi (2)\n
17 Veri Madenciliği (4)\n
18 Yapay Öğrenme (3)\n
Now it's time to collect the names, comment counts, and dates of the articles that were written recently.
As before, we load the rvest library using the following command:
library(rvest)
Now we need to create NULL variables, because we are going to save the comment counts, dates, and names of the articles. For this purpose, we are creating the name, date, and comment_count variables:
#creating NULL variables
name <- NULL
date <- NULL
comment_count <- NULL
The following script first visits the URL of the web page and collects the HTML nodes using the read_html function. To parse the HTML nodes, we use the html_nodes function with the XPath rules that we created in the previous section:
We managed to collect the name, comment counts, and the date of the articles:
We can use the data.frame function to see the name, date, and comment_count variables together:
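The collection script is likewise shown as a screenshot. A runnable sketch under the same assumptions applies: an inline fragment stands in for two article previews, and the class names and XPath rules are hypothetical stand-ins for the real ones:

```r
library(rvest)

# Inline stand-in for the article previews; the real script parses
# http://devveri.com/ with the XPath rules built in the previous section.
html <- paste0(
  "<div class='post'><h2>Amazon EMR ile Spark</h2>",
  "<span class='date'>18 Ocak 2018</span><span class='cc'>0</span></div>",
  "<div class='post'><h2>Basit Lineer Regresyon</h2>",
  "<span class='date'>11 Subat 2016</span><span class='cc'>2</span></div>")
page <- read_html(html)

name          <- html_text(html_nodes(page, xpath = "//div[@class='post']/h2"))
date          <- html_text(html_nodes(page, xpath = "//span[@class='date']"))
comment_count <- html_text(html_nodes(page, xpath = "//span[@class='cc']"))

data.frame(name, date, comment_count)
```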
name date comment_count
1 Amazon EMR ile Spark 18 Ocak 2018 0
2 Amazon EMR 13 Ocak 2018 0
3 AWS ile Big Data 11 Ocak 2018 0
4 Apache Hadoop 3.0 10 Ocak 2018 0
5 Big Data Teknolojilerine Hızlı Giriş 19 Haziran 2017 1
6 Günlük Hayatta Yapay Zekâ Teknikleri – Yazı Dizisi (1) 29 Mart
7 Hive Veritabanları Arası Tablo Taşıma 18 Şubat 2016 0
8 Basit Lineer Regresyon 11 Şubat 2016 2
9 Apache Sentry ile Yetkilendirme 10 Ocak 2016 0
10 Hive İç İçe Sorgu Kullanımı 09 Aralık 2015 2
11 Kmeans ve Kmedoids Kümeleme 07 Aralık 2015 0
12 Veri analizinde yeni alışkanlıklar 25 Kasım 2015 0
13 Daha İyi Bir Veri Bilimcisi Olmanız İçin 5 İnanılmaz Yol 02 Kasım 2015 1
14 R ile Korelasyon, Regresyon ve Zaman Serisi Analizleri 12 Ekim
15 Data Driven Kavramı ve II. Faz 28 Eylül 2015 0
Playing with the data
We have two different datasets. We’ve already collected categories and article counts for each category, and we have already collected the name, date, and comment counts of the articles that were written recently.
We should apply some basic text manipulation to get the counts into a more usable format. Because the counts currently look as shown here, we have to strip out the extra characters:
[1,] " (11)\n"
[2,] " (3)\n"
[3,] " (1)\n"
[4,] " (2)\n"
[5,] " (4)\n"
[6,] " (1)\n"
[7,] " (7)\n"
[8,] " (24)\n"
[9,] " (1)\n"
[10,] " (1)\n"
[11,] " (3)\n"
[12,] " (12)\n"
[13,] " (2)\n"
[14,] " (1)\n"
[15,] " (1)\n"
[16,] " (2)\n"
[17,] " (4)\n"
[18,] " (3)\n"
We need to replace "\n", "(", and ")" with "". For this, we are going to use the str_replace_all function, which means we need to install the stringr package and load it:
library(stringr)
count <- str_replace_all(count,"\\(","")
count <- str_replace_all(count,"\\)","")
count <- str_replace_all(count,"\n","")
Now we have the article counts in a better format. If we create the data frame using the new version of the count variable and article categories, we will get the following result:
1 Big Data 11
2 Cloud 3
3 docker 1
4 Doğal Dil İşleme 2
5 ElasticSearch 4
6 Graph 1
7 Haberler 7
8 Hadoop 24
9 HBase 1
10 Kitap 1
11 Lucene / Solr 3
12 Nosql 12
13 Ölçeklenebilirlik 2
14 Polyglot 1
15 Sunum 1
16 Veri Bilimi 2
17 Veri Madenciliği 4
18 Yapay Öğrenme 3
Let's assign this data frame to a variable and cast the counts to numeric, because they are currently strings. If we run the following code, we will convert the counts to numeric format and create a new data frame:
count <- as.numeric(count)
categories <- data.frame(category,count)
Now we are ready to create some charts:
To do this, we can use the interactive plotting library of R, plotly.
You can install it using the install.packages("plotly") command.
Then, of course, we have to call this library using the library(plotly) command:
The following command will help us to create a bar chart to show the article counts for each category:
plot_ly(categories, x = ~category, y = ~count, type = 'bar')
We can create some charts using our second dataset that is about the date, name, and comment counts of articles that were written recently. If you remember, we already collected the following data for this purpose:
name date comment_count
1 Amazon EMR ile Spark 18 Ocak 2018 0
2 Amazon EMR 13 Ocak 2018 0
3 AWS ile Big Data 11 Ocak 2018 0
We are ready to create our final data frame. But don't forget that the comment counts are still in string format; we have to cast them to numeric using the as.numeric function:
comment_count <- as.numeric(comment_count)
comments <- data.frame(name, date, comment_count)
Now we're ready to go! Let's calculate the average comment count per date.
To do this, we can use the aggregate function:
avg_comment_counts <- aggregate(comment_count ~ date, data = comments, FUN = "mean")
Now we have the daily average comment counts; let's create a line chart to see the changes in the daily average comment counts:
plot(avg_comment_counts,type = "l")
The following line chart shows us the average comment counts based on dates:
Now, let's investigate the dataset a little more. It would be useful to see the summary statistics of the comment counts. In this part, we are going to calculate the minimum, maximum, mean, and median of the comment counts and then create a bar chart that shows those summary statistics.
By using the following commands, we can calculate those summary statistics:
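Those commands appear as a screenshot in the original; a sketch of the idea, using sample comment counts (hypothetical values) and column names matching the summary$... references used in the plotting commands, is:

```r
# Sample values standing in for comments$comment_count from the scraped data
comment_count <- c(0, 0, 0, 0, 1, 0, 2, 0, 2, 0, 0, 1, 0)

# One-row data frame holding the four summary statistics
summary <- data.frame(
  min_comment_count    = min(comment_count),
  max_comment_count    = max(comment_count),
  avg_comment_count    = mean(comment_count),
  median_comment_count = median(comment_count))
```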
Now that we have the summary statistics, we can create a bar chart from those values using the following commands. Because our plot will contain more than one category, we are going to use the add_trace function:
plot_ly(x = "min", y = summary$min_comment_count, type = 'bar',name='min') %>%
add_trace(x = "max", y = summary$max_comment_count, type = 'bar',name='max')%>%
add_trace(x = "avg", y = summary$avg_comment_count, type = 'bar',name='average')%>%
add_trace(x = "median", y = summary$median_comment_count, type = 'bar',name='median')
As you can see, this bar chart is a summary of the statistics of the comment counts:
That’s it! If you enjoyed reading this article and want to learn more about web scraping with R, you can explore R Web Scraping Quick Start Guide. Written by Olgun Aydin, a PhD candidate at the Department of Statistics at Mimar Sinan University, R Web Scraping Quick Start Guide is for R programmers who want to get started quickly with web scraping, as well as data analysts who want to learn scraping using R.
In this article, you’ll see how you can traverse the DOM. DOM traversal entails getting to the desired element with the help of either XPath or CSS selectors. With XPath, it is possible to traverse the DOM in both the forward and backward directions, but XPath traversal is slower than CSS. Traversal using CSS can only be done in the forward direction. In order to traverse the DOM using either XPath or CSS, we need to understand the By class.
Dissecting the By class
The By class is an abstract class that has eight static methods and eight inner classes. Let's understand the structure of the By class.
The following code skeleton shows a fragment of the structure of the By class: