Linux Screen

Below are the most basic steps for getting started with screen:

  1. At the command prompt, type screen.
  2. Run the desired program.
  3. Use the key sequence Ctrl+a d to detach from the screen session.
  4. Reattach to the screen session by typing screen -r.
  5. Terminate the screen session by typing exit or pressing Ctrl+d.

Screen, or GNU Screen, is a terminal multiplexer: you can start a screen session and then open any number of windows (virtual terminals) inside that session. Processes running in Screen continue to run when their window is not visible, even if you get disconnected.

The screen package is pre-installed on most Linux distros nowadays. You can check if it is installed on your system by typing:

screen --version

If you don’t have screen installed on your system, you can easily install it using your distro’s package manager. On Ubuntu and Debian:

sudo apt install screen

To start a screen session, simply type screen in your console:

screen


This will open a screen session, create a new window and start a shell in that window.

Now that you have opened a screen session you can get a list of commands by typing:

Ctrl+a ?

Named sessions are useful when you run multiple screen sessions. To create a named session, run the screen command with the following arguments:

screen -S session_name

It’s always a good idea to choose a descriptive session name.
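
For example, you can start a named session for a long-running job and later reattach to it by name (the name backup here is just for illustration):

screen -S backup

Detach with Ctrl+a d, then reattach by name:

screen -r backup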

When you start a new screen session by default it creates a single window with a shell in it.

To create a new window with a shell, type Ctrl+a c; the first available number from the range 0...9 will be assigned to it.

Below are the most common commands for managing Linux Screen windows:

  • Ctrl+a c Create a new window (with shell)
  • Ctrl+a " List all window
  • Ctrl+a 0 Switch to window 0 (by number )
  • Ctrl+a A Rename the current window
  • Ctrl+a S Split current region horizontally into two regions
  • Ctrl+a | Split current region vertically into two regions
  • Ctrl+a tab Switch the input focus to the next region
  • Ctrl+a Ctrl+a Toggle between current and previous region
  • Ctrl+a Q Close all regions but the current one
  • Ctrl+a X Close the current region

You can detach from the screen session at any time by typing:

Ctrl+a d

The program running in the screen session will continue to run after you detach from the session.

To resume your screen session use the following command:

screen -r


In case you have multiple screen sessions running on your machine, you will need to append the screen session ID after the -r switch.

To find the session ID list the current running screen sessions with:

screen -ls
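
The output will look something like this (the session ID and host name will differ on your machine):

There is a screen on:
        10835.pts-0.server1   (Detached)
1 Socket in /run/screen/S-user.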

If you want to restore screen 10835.pts-0, then type the following command:

screen -r 10835

Dockerize PostgreSQL

Deploying PostgreSQL on a Docker Container

https://severalnines.com/blog/deploying-postgresql-docker-container

Install PostgreSQL on Docker

Assuming there is no Docker image that suits your needs on the Docker Hub, you can create one yourself.

Start by creating a new Dockerfile:

Note: This PostgreSQL setup is for development-only purposes. Refer to the PostgreSQL documentation to fine-tune these settings so that it is suitably secure.

#
# example Dockerfile for https://docs.docker.com/engine/examples/postgresql_service/
#

FROM ubuntu

# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8

# Add PostgreSQL's repository. It contains the most recent stable release
#     of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list

# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
#  There are some warnings (in red) that show up during the build. You can hide
#  them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3

# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``

# Run the rest of the commands as the ``postgres`` user created by the ``postgresql-9.3`` package when it was ``apt-get installed``
USER postgres

# Create a PostgreSQL role named ``docker`` with ``docker`` as the password and
# then create a database `docker` owned by the ``docker`` role.
# Note: here we use ``&&\`` to run commands one after the other - the ``\``
#       allows the RUN command to span multiple lines.
RUN    /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker

# Adjust PostgreSQL configuration so that remote connections to the
# database are possible.
RUN echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf

# And add ``listen_addresses`` to ``/etc/postgresql/9.3/main/postgresql.conf``
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf

# Expose the PostgreSQL port
EXPOSE 5432

# Add VOLUMEs to allow backup of config, logs and databases
VOLUME  ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]

# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

Build an image from the Dockerfile and assign it a name.

$ docker build -t eg_postgresql .

Run the PostgreSQL server container (in the foreground):

$ docker run --rm -P --name pg_test eg_postgresql

There are two ways to connect to the PostgreSQL server. We can use Link Containers, or we can access it from our host (or the network).

Note: The --rm flag automatically removes the container when it exits; it does not remove the image.

Use container linking

Containers can be linked to another container’s ports directly using --link remote_name:local_alias in the client’s docker run. This sets a number of environment variables that can then be used to connect:

$ docker run --rm -t -i --link pg_test:pg eg_postgresql bash

postgres@7ef98b1b7243:/$ psql -h $PG_PORT_5432_TCP_ADDR -p $PG_PORT_5432_TCP_PORT -d docker -U docker --password

Connect from your host system

Assuming you have the postgresql-client installed, you can use the host-mapped port to test as well. You need to use docker ps to find out what local host port the container is mapped to first:

$ docker ps

CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                                      NAMES
5e24362f27f6        eg_postgresql:latest   /usr/lib/postgresql/   About an hour ago   Up About an hour    0.0.0.0:49153->5432/tcp                    pg_test

$ psql -h localhost -p 49153 -d docker -U docker --password

Test the database

Once you have authenticated and have a docker=# prompt, you can create a table and populate it.

psql (9.3.1)
Type "help" for help.

docker=# CREATE TABLE cities (
docker(#     name            varchar(80),
docker(#     location        point
docker(# );
CREATE TABLE
docker=# INSERT INTO cities VALUES ('San Francisco', '(-194.0, 53.0)');
INSERT 0 1
docker=# select * from cities;
     name      | location
---------------+-----------
 San Francisco | (-194,53)
(1 row)

Use the container volumes

You can use the defined volumes to inspect the PostgreSQL log files and to backup your configuration and data:

$ docker run --rm --volumes-from pg_test -t -i busybox sh

/ # ls
bin      etc      lib      linuxrc  mnt      proc     run      sys      usr
dev      home     lib64    media    opt      root     sbin     tmp      var
/ # ls /etc/postgresql/9.3/main/
environment      pg_hba.conf      postgresql.conf
pg_ctl.conf      pg_ident.conf    start.conf
/tmp # ls /var/log
ldconfig    postgresql

Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

The Compose file is a YAML file defining services, networks, and volumes. The default path for a Compose file is ./docker-compose.yml.

A service definition contains configuration that is applied to each container started for that service, much like passing command-line parameters to docker container create. Likewise, network and volume definitions are analogous to docker network create and docker volume create.

As with docker container create, options specified in the Dockerfile, such as CMD, EXPOSE, VOLUME, and ENV, are respected by default – you don’t need to specify them again in docker-compose.yml.

You can use environment variables in configuration values with a Bash-like ${VARIABLE} syntax.

build

Configuration options that are applied at build time.

build can be specified either as a string containing a path to the build context:

version: '3'
services:
  webapp:
    build: ./dir

Or, as an object with the path specified under context and optionally Dockerfile and args:

version: '3'
services:
  webapp:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        buildno: 1

If you specify image as well as build, then Compose names the built image with the name and optional tag specified in image:

build: ./dir
image: webapp:tag

This results in an image named webapp and tagged tag, built from ./dir.

CONTEXT

Either a path to a directory containing a Dockerfile, or a url to a git repository.

When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon.

Compose builds and tags it with a generated name, and uses that image thereafter.

build:
  context: ./dir

DOCKERFILE

Alternate Dockerfile.

Compose uses an alternate file to build with. A build path must also be specified.

build:
  context: .
  dockerfile: Dockerfile-alternate

command

Override the default command.

command: bundle exec thin -p 3000

The command can also be a list, in a manner similar to dockerfile:

command: ["bundle", "exec", "thin", "-p", "3000"]

configs

Grant access to configs on a per-service basis using the per-service configs configuration. Two different syntax variants are supported.
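
A minimal sketch of the short syntax (the service and config names here are illustrative; the long syntax additionally lets you set the target path, ownership, and mode):

version: "3.3"
services:
  redis:
    image: redis:latest
    configs:
      - my_config
configs:
  my_config:
    file: ./my_config.txt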

image

Specify the image to start the container from. Can either be a repository/tag or a partial image ID.

image: redis
image: ubuntu:14.04
image: tutum/influxdb
image: example-registry.com:4000/postgresql
image: a4bc65fd

If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.

volumes

Mount host paths or named volumes, specified as sub-options to a service.

You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.

But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key.

Note: The top-level volumes key defines a named volume and references it from each service’s volumes list. This replaces volumes_from in earlier versions of the Compose file format.

An entry under the top-level volumes key can be empty, in which case it uses the default driver configured by the Engine (in most cases, this is the local driver).
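
A minimal sketch (the service and volume names are illustrative): the named volume db-data is declared once at the top level and mounted into the service:

version: '3'
services:
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data: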

environment

Add environment variables. You can use either an array or a dictionary. Any boolean values (true, false, yes, no) need to be enclosed in quotes to ensure they are not converted to True or False by the YAML parser.

Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values.

environment:
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET

Note: If your service specifies a build option, variables defined in environment are not automatically visible during the build. Use the args sub-option of build to define build-time environment variables.

Compose supports declaring default environment variables in an environment file named .env placed in the folder where the docker-compose command is executed (current working directory).

These syntax rules apply to the .env file:

  • Compose expects each line in an env file to be in VAR=VAL format.
  • Lines beginning with # are processed as comments and ignored.
  • Blank lines are ignored.
  • There is no special handling of quotation marks. This means that they are part of the VAL.
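
For example, a small .env file following these rules (the variable names are illustrative):

# this line is a comment and is ignored
TAG=v1.5
DB_PASSWORD="secret"

Because quotation marks receive no special handling, DB_PASSWORD here resolves to "secret" including the quotes.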

Note: Values present in the environment at runtime always override those defined inside the .env file. Similarly, values passed via command-line arguments take precedence as well.

Substitute environment variables in Compose files

It’s possible to use environment variables in your shell to populate values inside a Compose file:

web:
  image: "webapp:${TAG}"

Set environment variables in containers

You can set environment variables in a service’s containers with the ‘environment’ key, just like with docker run -e VARIABLE=VALUE ...:

web:
  environment:
    - DEBUG=1

Pass environment variables to containers

You can pass environment variables from your shell straight through to a service’s containers with the ‘environment’ key by not giving them a value, just like with docker run -e VARIABLE ...:

web:
  environment:
    - DEBUG

The value of the DEBUG variable in the container is taken from the value for the same variable in the shell in which Compose is run.

The “env_file” configuration option

You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option, just like with docker run --env-file=FILE ...:

web:
  env_file:
    - web-variables.env

Set environment variables with ‘docker compose run’

Just like with docker run -e, you can set environment variables on a one-off container with docker-compose run -e:

docker-compose run -e DEBUG=1 web python console.py

You can also pass a variable through from the shell by not giving it a value:

docker-compose run -e DEBUG web python console.py

The value of the DEBUG variable in the container is taken from the value for the same variable in the shell in which Compose is run.

The “.env” file

You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env:

$ cat .env
TAG=v1.5

$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"

When you run docker-compose up, the web service defined above uses the image webapp:v1.5. You can verify this with the config command, which prints your resolved application config to the terminal:

$ docker-compose config

version: '3'
services:
  web:
    image: 'webapp:v1.5'

Values in the shell take precedence over those specified in the .env file. If you set TAG to a different value in your shell, the substitution in image uses that instead:

$ export TAG=v2.0
$ docker-compose config

version: '3'
services:
  web:
    image: 'webapp:v2.0'

When you set the same environment variable in multiple files, here’s the priority used by Compose to choose which value to use:

  1. Compose file
  2. Shell environment variables
  3. Environment file
  4. Dockerfile
  5. Variable is not defined

In the example below, we set the same environment variable in an environment file and in the Compose file:

$ cat ./Docker/api/api.env
NODE_ENV=test

$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'node:6-alpine'
    env_file:
     - ./Docker/api/api.env
    environment:
     - NODE_ENV=production

When you run the container, the environment variable defined in the Compose file takes precedence.

$ docker-compose exec api node

> process.env.NODE_ENV
'production'

An ARG or ENV setting in a Dockerfile evaluates only if there is no Docker Compose entry for environment or env_file.

 

GitHub Go

  • GoRequest — Simplified HTTP client

https://github.com/parnurzeal/gorequest

  • Gin — HTTP web framework

https://github.com/gin-gonic/gin

Gin Web Framework Document:

https://gin-gonic.com/zh-cn/docs/

  • Structured, pluggable logging

https://github.com/sirupsen/logrus

  • Pretty printer

https://github.com/davecgh/go-spew

  • JSON web token

https://github.com/dgrijalva/jwt-go

  • Postgres driver for sql

https://github.com/lib/pq

  • Build command line apps

https://github.com/urfave/cli

  • Go web programming

https://github.com/sausheong/gwp

  • Go dot env file

https://github.com/joho/godotenv

  • Go cmp

https://github.com/google/go-cmp

Docker Development Best Practices

The following development patterns have proven to be helpful for people building applications with Docker.

How to keep your images small

Small images are faster to pull over the network and faster to load into memory when starting containers or services. There are a few rules of thumb to keep image size small:

  • Start with an appropriate base image. For instance, if you need a JDK, consider basing your image on the official openjdk image, rather than starting with a generic ubuntu image and installing openjdk as part of the Dockerfile.
  • Use multistage builds. For instance, you can use the maven image to build your Java application, then reset to the tomcat image and copy the Java artifacts into the correct location to deploy your app, all in the same Dockerfile. This means that your final image doesn’t include all of the libraries and dependencies pulled in by the build, but only the artifacts and the environment needed to run them. (A sketch of a multistage Dockerfile follows this list.)
    • If you need to use a version of Docker that does not include multistage builds, try to reduce the number of layers in your image by minimizing the number of separate RUN commands in your Dockerfile. You can do this by consolidating multiple commands into a single RUN line and using your shell’s mechanisms to combine them. Consider the following two fragments. The first creates two layers in the image, while the second creates only one.
      RUN apt-get -y update
      RUN apt-get install -y python
      
      RUN apt-get -y update && apt-get install -y python
      
  • If you have multiple images with a lot in common, consider creating your own base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they will be cached. This means that your derivative images use memory on the Docker host more efficiently and load more quickly.
  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tooling can be added on top of the production image.
  • When building images, always tag them with useful tags which codify version information, intended destination (prod or test, for instance), stability, or other information that will be useful when deploying the application in different environments. Do not rely on the automatically-created latest tag.
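
As a concrete illustration of the multistage pattern mentioned above, here is a minimal sketch for a Go application (the image tags and file names are assumptions):

FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Static build, so the binary runs on the minimal base image below
RUN CGO_ENABLED=0 go build -o /bin/app .

FROM alpine:3.19
COPY --from=build /bin/app /bin/app
CMD ["/bin/app"]

Only the compiled binary ends up in the final image; the Go toolchain and build cache stay behind in the discarded build stage.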

Where and how to persist application data

  • Avoid storing application data in your container’s writable layer using storage drivers. This increases the size of your container and is less efficient from an I/O perspective than using volumes or bind mounts.
  • Instead, store data using volumes.
  • One case where it is appropriate to use bind mounts is during development, when you may want to mount your source directory or a binary you just built into your container. For production, use a volume instead, mounting it into the same location as you mounted a bind mount during development.
  • For production, use secrets to store sensitive application data used by services, and use configs for non-sensitive data such as configuration files. If you currently use standalone containers, consider migrating to use single-replica services, so that you can take advantage of these service-only features.

Use swarm services when possible

  • When possible, design your application to be able to scale using swarm services.
  • Even if you only need to run a single instance of your application, swarm services provide several advantages over standalone containers. A service’s configuration is declarative, and Docker is always working to keep the desired and actual state in sync.
  • Networks and volumes can be connected and disconnected from swarm services, and Docker handles redeploying the individual service containers in a non-disruptive way. Standalone containers need to be manually stopped, removed, and recreated to accommodate configuration changes.
  • Several features, such as the ability to store secrets and configs, are only available to services rather than standalone containers. These features allow you to keep your images as generic as possible and to avoid storing sensitive data within the Docker images or containers themselves.
  • Let docker stack deploy handle any image pulls for you, instead of using docker pull. This way, your deployment won’t try to pull from nodes that are down. Also, when new nodes are added to the swarm, images are pulled automatically.

There are limitations around sharing data amongst nodes of a swarm service. If you use Docker for AWS or Docker for Azure, you can use the Cloudstor plugin to share data amongst your swarm service nodes. You can also write your application data into a separate database which supports simultaneous updates.

Use CI/CD for testing and deployment

  • When you check a change into source control or create a pull request, use Docker Cloud or another CI/CD pipeline to automatically build and tag a Docker image and test it. Docker Cloud can also deploy tested apps straight into production.
  • Take this even further with Docker EE by requiring your development, testing, and security teams to sign images before they can be deployed into production. This way, you can be sure that before an image is deployed into production, it has been tested and signed off by, for instance, development, quality, and security teams.

Differences in development and production environments

  • Development: use bind mounts to give your container access to your source code. Production: use volumes to store container data.
  • Development: use Docker for Mac or Docker for Windows. Production: use Docker EE if possible, with userns mapping for greater isolation of Docker processes from host processes.
  • Development: don’t worry about time drift. Production: always run an NTP client on the Docker host and within each container process and sync them all to the same NTP server. If you use swarm services, also ensure that each Docker node syncs its clock to the same time source as the containers.

Best practices for writing Dockerfiles

Learning Docker

Installing Docker on Ubuntu

https://docs.docker.com/install/linux/docker-ce/ubuntu/

Manage Docker as a non-root user

https://docs.docker.com/install/linux/linux-postinstall/

An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.

A container is a runtime instance of an image—what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.

Containers run apps natively on the host machine’s kernel. They have better performance characteristics than virtual machines that only get virtual access to host resources through a hypervisor. Containers can get native access, each one running in a discrete process, taking no more memory than any other executable.

Basic Docker commands

docker build -t friendlyname .  # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyname  # Run "friendlyname" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyname         # Same thing, but in detached mode
docker container ls                                # List all running containers
docker container ls -a             # List all containers, even those not running
docker container stop <hash>           # Gracefully stop the specified container
docker container kill <hash>         # Force shutdown of the specified container
docker container rm <hash>        # Remove specified container from this machine
docker container rm $(docker container ls -a -q)         # Remove all containers
docker image ls -a                             # List all images on this machine
docker image rm <image id>            # Remove specified image from this machine
docker image rm $(docker image ls -a -q)   # Remove all images from this machine
docker login             # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag  # Tag <image> for upload to registry
docker push username/repository:tag            # Upload tagged image to registry
docker run username/repository:tag                   # Run image from a registry


Using Docker

  • An image is every file that makes up just enough of the operating system to do what you need to do:
$ docker images
  • Run an image; -it stands for interactive terminal. Docker run takes images to containers, and docker commit takes containers back to new images. In Docker, programs run in containers, which are created from images.
$ docker run -it ubuntu:latest bash -c "sleep 3; echo all done"
$ docker commit admiring_tharp my-image
$ docker commit admiring_tharp my-image:v1.2
  • Check running containers, all containers (stopped and running), and the latest-created one. A Docker container continues to run until the process that started it exits.
$ docker ps
$ docker ps -a
$ docker ps -l
  • Delete the container afterwards with --rm. Start a container; it will sit there for five seconds and then exit.
$ docker run --rm -it ubuntu sleep 5
  • Running in the background (detached)
$ docker run -d -it ubuntu bash
  • Attach a container
$ docker attach suspicious_williams
  • Exit the container by detaching from it while leaving it running: Ctrl + P, Ctrl + Q
  • Start another process in an existing container (great for debugging and DB administration). When the original container exits (Ctrl + D), the process attached using exec dies with it.
$ docker exec -it suspicious_williams bash
  • View the output of containers
$ docker logs container_name
  • Stop a container
$ docker kill container_name
  • Remove a container.
$ docker rm container_name
  • Remove an image
$ docker rmi image-name:tag
$ docker rmi image-id
  • Share folders with the host so that data is persistent on the host after containers go away
$ mkdir example
$ docker run -it -v /home/xxx/example:/shared-folder ubuntu bash
  • Share data between containers: shared “disks” that exist only as long as they are being used. Volumes are ephemeral; they can be passed from one container to the next, but they are not saved and will eventually go away.
$ docker run -it -v /shared-data ubuntu bash
$ docker run -it --volumes-from machine_with_shared_volumes ubuntu bash
  • To make port 80 inside a container accessible from the internet on port 8080, map TCP port 80 in the container to port 8080 on the Docker host (image-name is a placeholder):
$ docker run -p 8080:80 image-name
  • When you link two containers together, you link all their ports, but only one way. You are connecting from the client to the server, but the server doesn’t know when a client connects to it or goes away. Use links only for services that cannot ever run on different machines, such as a service and its health check.
    • Run a server
      • docker run -it --rm --name server ubuntu bash
      • nc -lp 1234
    • Start another container to be client
      • docker run --rm -it --link server --name client ubuntu bash
      • # nc server 1234
      • Automatically assigns a hostname. The links can break when containers restart.
      • # cat /etc/hosts
  • To use a private network, you must add --net=network-name to both the client and server (see the example after this list).
  • Docker registries manage and distribute images. Finding ubuntu images:
$ docker search ubuntu
  • Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
$ docker login
$ docker pull debian:sid
$ docker tag debian:sid weicode/test-image:v99.9
$ docker push weicode/test-image:v99.9
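
Referring back to the private-network bullet above, a minimal sketch (the network and container names are illustrative):

$ docker network create my-net
$ docker run --rm -it --net=my-net --name server ubuntu bash
$ docker run --rm -it --net=my-net --name client ubuntu bash

Containers on the same user-defined network can reach each other by container name, which is more robust than --link.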

Building Docker Images

  • Dockerfile is a small “program” to create an image. You run this program with
$ docker build -t name-of-result .
  • Dockerfiles are not shell scripts.
    • Processes you start on one line will not be running on the next line. You can’t treat it like a shell script and start a program on one line, then send a message to that program on the next line; the program won’t be running.
    • Environment variables you set will be set on the next line. If you use the ENV command, remember that each line is its own call to docker run.
    • Each line of a docker file makes a new independent image based on the previous line’s image.
    • The Docker file WORKDIR command changes directories both for the rest of the Docker file, and the finished image.
  • Basic Dockerfile
    • Put this in a file named Dockerfile:
      • FROM busybox
        RUN echo "building simple docker image."
        CMD echo "hello container"
    • Now build it
      • $ docker build -t hello .
    • Now run it
      • $ docker run --rm hello
  • Installing a program with Docker Build
    • Put this in a file named Dockerfile:
      • FROM debian:sid
        RUN apt-get -y update
        RUN apt-get install -y nano
        CMD ["/bin/nano", "/tmp/notes"]
    • Now build it
      • $ docker build -t example/nanoer .
    • Now run it
      • $ docker run --rm -it example/nanoer
  • Adding a file through Docker Build
    • Put this in a Dockerfile:
      • FROM example/nanoer
        ADD notes.txt /notes.txt
        CMD ["/bin/nano", "/notes.txt"]
      • Now build it
        • $ docker build -t example/notes .
      • Now run it
        • $ docker run --rm -it example/notes
  • Dockerfile reference: https://docs.docker.com/engine/reference/builder/
    • CMD command sets the program to run when the container starts.
    • The Docker file RUN command starts a program that runs only for one line of the Docker file.
    • The Docker file ENV command set environment variables both in the rest of the Docker file, and in the finished image.
  • Preventing the Golden Image Problem
    • Include installers in your project
    • Have a canonical build that builds everything completely from scratch
    • Tag your builds with the git hash of the code that built it
    • Use small base images, such as Alpine
    • Build images you share publicly from Dockerfiles, always

Under the Hood

  • Docker uses bridges and NAT to create virtual Ethernet networks in your computer. The bridges are software switches that control the Ethernet layer.
  • Docker port forwarding
    • sudo iptables -n -L -t nat
  • Find the process id of the main process in the container
    • $ docker inspect --format '{{.State.Pid}}' hello
  • The cgroups Linux kernel feature is essential for container process isolation.
  • Docker images are read only.

Orchestration: Building Systems with Docker

  • Set which registry an image will be uploaded to: docker tag
  • Docker Compose
    • Single machine coordination
    • Designed for testing and development
    • Brings up all your containers, volumes, networks, etc., with one command
  • Kubernetes
    • Containers run programs
    • Pods group containers together
    • Services make pods available to others
    • Labels are used for advanced service discovery
    • Makes scripting large operations possible with the kubectl command
    • Very flexible overlay networking
    • Runs equally well on your hardware or a cloud provider

Dangling images are not referenced by other images and are safe to delete. If you have a lot of them, it can be really tedious to remove them, but lucky for us Docker has a few commands to help us eliminate dangling images.

In older versions of Docker (and this still works today), you can delete dangling images on their own by running:

docker rmi -f $(docker images -f "dangling=true" -q)

Cloud Native Go

Simple Go Microservices

Simple Go HTTP Server Implementation

  • Using the Go net/http package
  • Implementing and start a simple HTTP server
  • Defining simple handler functions
package main

import (
    "fmt"
    "net/http"
    "os"
    "github.com/PacktPublishing/Cloud-Native-Go/api"
)

func main() {
    http.HandleFunc("/", index)
    http.HandleFunc("/api/echo", api.EchoHandleFunc)

    http.HandleFunc("/api/hello", api.HelloHandleFunc)

    http.HandleFunc("/api/books", api.BooksHandleFunc)
    http.HandleFunc("/api/books/", api.BookHandleFunc)
    http.ListenAndServe(port(), nil)
}

func port() string {
    port := os.Getenv("PORT")
    if len(port) == 0 {
        port = "8080"
    }
    return ":" + port
}

func index(w http.ResponseWriter, r *http.Request) {
    w.WriteHeader(http.StatusOK)
    fmt.Fprintf(w, "Welcome to Cloud Native Go (Update).")
}
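
The api package above comes from the book’s repository. As a rough sketch of the shape of one of those handlers (not the book’s actual code):

package api

import (
    "fmt"
    "net/http"
)

// HelloHandleFunc writes a plain-text greeting.
func HelloHandleFunc(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    fmt.Fprintln(w, "Hello Cloud Native Go!")
}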

To be added…

Tenable.io API

Authorizing Your App to Use the Tenable.io API

Requests that require authentication need an API key to be sent with their headers.

API Keys

These keys are generated per account through session:keys or users:keys and can be used to authenticate without creating a session.

Add them to your request using the following HTTP header:

X-ApiKeys: accessKey={accessKey}; secretKey={secretKey};

Example:

curl -H "X-ApiKeys: accessKey={accessKey}; secretKey={secretKey}" https://cloud.tenable.com/scans

In addition, it is possible to perform actions as if authenticated as a different user by adding an additional HTTP header.

X-Impersonate: username={username}

Example:

curl -H "X-Impersonate: username={username}" -H "X-ApiKeys: accessKey={accessKey}; secretKey={secretKey}" https://cloud.tenable.com/scans

Generating API Keys

NOTICE: API keys are only presented upon initial generation. Please store them in a safe location, as they cannot be retrieved later and will need to be regenerated if lost.

The Tenable.io API UI can help you build a sufficient foundation so that you can then perform more complex requests via other API utilities such as cURL or Postman. As usual, authentication is necessary with these utilities before the requests for data will work. A POST to <https://cloud.tenable.com/session> with your credentials in the body will give you the session token you need to perform data queries. The cURL request for getting your authenticated session token would look similar to this, but with your own credentials (note that because this is a POST request to an https URL, your credentials are not being transmitted in the clear!):

curl -X POST -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"username":"sample@tenableio.user", "password":"YourPasswordHere"}' "https://cloud.tenable.com/session"

Once you have the session token, you can perform queries to accomplish any of the tasks available via the API UI. In the API UI, look at the HTTP Request information to get the proper method and URL syntax to run a query. For example, this query would provide you with your list of scans and details about them:

curl -X GET -H "X-Cookie: token=YourSessionTokenHere" -H "Cache-Control: no-cache" "https://cloud.tenable.com/scans"

A request like this one would return your list of target groups:

curl -X GET -H "X-Cookie: token=YourSessionTokenHere" -H "Cache-Control: no-cache" "https://cloud.tenable.com/target-groups"

And this request closes down your session:

curl -X DELETE -H "X-Cookie: token=YourSessionTokenHere" -H "Cache-Control: no-cache" "https://cloud.tenable.com/session"

Any of these cURL requests can be easily modified for use in Postman or scripting in various languages.
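
The same API-key request can also be made from Go. A minimal sketch (the placeholder keys are assumptions; the endpoint and header format come from the examples above):

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Placeholder credentials; substitute your generated keys.
    accessKey := "YOUR_ACCESS_KEY"
    secretKey := "YOUR_SECRET_KEY"

    req, err := http.NewRequest("GET", "https://cloud.tenable.com/scans", nil)
    if err != nil {
        panic(err)
    }
    // Same header format as the cURL example above.
    req.Header.Set("X-ApiKeys",
        fmt.Sprintf("accessKey=%s; secretKey=%s", accessKey, secretKey))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}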

Go

  • Web Frameworks
  • The Go Playground
    Note: The Go Playground does not allow HTTP requests; net.LookupHost("www.google.com") will return "Protocol not available".
  • Effective Go
  • Go pkg
  • Go Database
  • Other resources:

Go Channels

source

Channels in Go are first-class members of the language. They provide a mechanism not just to communicate between concurrently executing go-routines, but also to synchronize those go-routines.

Assuming that the channels are not buffered, every operation on a channel blocks until both the writer and the reader are ready to proceed, in a lock-step manner. Here is the link to the excellent talk by Rob Pike describing this behavior of channels.

Imagine a main process that spawns a go-routine at time t0. These two go-routines then communicate with each other using a channel at time t1. The important part to understand is that the write and read operations on the channel happen in a synchronized manner between these two independently executing code paths.

Now that we know how channels work, let’s look at the select statements that allow us to do event capturing. We will be using channels and select statements to solve the balanced-parentheses problem later.

Select Statements

Select statements are an interesting way to provide control-flow logic in Go code based on events. They allow us to define cases, much like switch statements, but these cases trigger on events such as communication occurring on a particular channel.

select {
case <-a:
   // when communication happens on channel a
case <-b:
   // when communication happens on channel b
case <-c:
   // when communication happens on channel c
default:
   // when none of the other channels are ready for communication
}

The default case is of great use. It allows us to write non-blocking code and define logic for the event that communication cannot proceed on any of the other cases.

Go spec defines the behavior of default case as follows:

If one or more of the communications can proceed, a single one that can proceed is chosen via a uniform pseudo-random selection. Otherwise, if there is a default case, that case is chosen. If there is no default case, the “select” statement blocks until at least one of the communications can proceed.

In other words, we could use the default case to check for situations where channels are not ready to either send or receive data. We will use this fact in our problem to check for the presence of matching parentheses.

Framing the Problem

In order to solve the balanced-parentheses problem using channels, we would essentially spawn a go-routine every time we encounter an opening parenthesis ( while traversing the input string. Each of these go-routines can then be read back every time we encounter a closing parenthesis ). Furthermore, we could break out of the loop if we can’t read from a channel when we expect we should, using the default case.

For our case, with only one type of parenthesis, it would not matter what we send on the channel. Just the fact that the communication “event” happens is the key. So let’s create a channel to send and receive empty structs.

As a side note, empty structs are truly zero cost abstractions in Go.

c := make(chan struct{})

We would then traverse the input string and spawn a go-routine every time we encounter an opening parenthesis (:

go func(){
   c <- struct{}{}
}()

Furthermore, we would read from the channel every time we encounter a closing parenthesis ):

<-c

However, while we are always able to create go-routines, we may not always have something to read. So reading from channel <-c may block, and that is an indication that we received a closing parenthesis ) without a matching opening parenthesis (. As I mentioned earlier, we could use the default case to simply return false in such cases.

select {
case <-c:
default:
   return false
}

Putting it all together

Now that we have the basic framework, our function to check if the input string has balanced parentheses would look something as follows:

func isBalanced(input string) bool {
   c := make(chan struct{})

   for i := range input {
      switch input[i] {
      case '(':
         go func(){
            c <- struct{}{}
         }()
      case ')':
         select {
         case <-c:
         default:
            return false
         }
      default:
      }
   }
   // finally we flip the behavior of default case and 
   // return false if we could read from the channel
   select {
   case <-c:
      return false
   default:
      return true
   }
}

The code above may seem conceptually correct but it has a bug. Try this on the playground. It fails! This is precisely the point I wanted to highlight in this post. So let’s look at why the code fails.

There are two cases to consider: in the first, main reads from the channel when the spawned go-routine’s control flow has already reached the point where it writes to the channel; in the second, main tries to read from the channel too soon, before the go-routine is ready to write.

In the first case, default does not get selected; in the second case, it does… and that is where the problem lies. While channels do allow synchronization, it is subtle to guarantee it during the phase when two independently executing go-routines are at different stages in their control flow and have not arrived at a point where the control flow blocks for synchronization.

Communicating too soon could be an issue

We can control this issue by introducing a second channel. The job of the second channel would be to provide some blocking at the time of go-routine creation.

func isBalanced(input string) bool {
   c := make(chan struct{})
   w := make(chan struct{})

   for i := range input {
      switch input[i] {
      case '(':
         go func() {
            w <- struct{}{}
            c <- struct{}{}
         }()
         <-w
      case ')':
         select {
         case <-c:
         default:
            return false
         }
      default:
      }
   }

   select {
   case <-c:
      return false
   default:
      return true
   }
}

By forcing a communication to occur between the newly spawned go-routine and the main using <-w, we ensure that the go-routine is at least at a control flow just before it writes to the channel c using c <- struct{}{}.

Try this on the playground; all seems to work now.

Summary

We used a simple problem to poke into the behavior of channels in Go. It matters how we write code to guarantee the timing of communication using channels. We identified a pattern to force a communication between go-routines in order to provide sufficient delay in the control flow such that we don’t trigger the default select cases when we don’t want to.


Go Notes

  • Go is a compiled, statically typed language.
  • The go tool can run a file without precompiling.
  • Compiled executables are OS specific
    • go build xxx.go to compile xxx.go to OS specific executable
  • Applications have a statically linked runtime.
  • No external virtual machine is needed
  • What Go doesn’t support
    • Type inheritance (no classes)
    • Method or operator overloading
    • Structured exception handling
    • Implicit numeric conversions
  • Go syntax rule
    • Go is case sensitive
    • Variable and package names are lower and mixed case.
    • Exported functions and fields have an initial upper-case character. (An initial upper-case character means that a field or a method is available to the rest of the application; it’s the equivalent of the public keyword in other languages. A lower-case initial character means that it’s not available to the rest of the application.)
    • Semicolons not needed
    • Code blocks are wrapped with braces
  • godoc fmt to see document about fmt
  • gofmt -w badformatting.go to format go code properly
  • Convert an integer to a float: float64(aNumber)
  • Print data type: fmt.Printf("Data type: %T", myData)
  • Variables
    • Explicitly typed declarations
      • Use var keyword and = assignment operator
      • var anInteger int = 42
    • Implicitly typed declarations
      • Use := assignment operator without var
      • anInteger := 42
  • Constants
    • A constant is a simple, unchanging value
    • Explicit typing
      • const anInteger int = 42
    • Implicit typing:
      • const aString = "This is Go!"
  • Array (fixed size):
    • var colors [3]string
    • var a [10]int
    • var numbers = [5]int{5,3,1,2,4}
  • Slices (dynamic size):
    • The type []T is a slice with elements of type T
    • A slice is formed by specifying two indices, a low and high bound, separated by a colon :
      • a[low:high]
      • This selects a half-open range which includes the first element, but excludes the last one.
      • a[1:4] creates a slice which includes elements 1 through 3 of a
    • var colors = []string{"red", "green", "blue"}
    • colors = append(colors, "purple")
    • A slice does not store any data. It just describes a section of an underlying array. Changing the elements of a slice modifies the corresponding elements of its underlying array. Other slices that share the same underlying array will see those changes.
    • A slice has both a length and capacity.
      • The length of a slice is the number of elements it contains.
      • The capacity of a slice is the number of elements in the underlying array, counting from the first element in the slice.
      • The length and capacity of a slice s can be obtained using the expressions len(s) and cap(s).
    • Slices can be created with the built-in make function; this is how you create dynamically-sized arrays.
      • a := make([]int, 5) // len(a)=5
      • b := make([]int, 0, 5) // len(b)=0, cap(b)=5
  • Use make() to allocate and initialize memory
    • m := make(map[string]int)
    • m["key"] = 42
    • fmt.Println(m)
  • Delete key from map
    • delete(mapName, keyName)
  • Due to the lexer’s sensitivity to line feeds, you must place the else keyword on the same line as the preceding closing brace. 
    • if x < 0 {
    •     result = "Less than zero"
    • } else {
    •     result = "Greater than or equal to zero"
    • }
  • Go is organized with packages, and packages have functions. Your own application has its own package, always named main, and it also has the main function, which is called automatically by the runtime as the application starts up.
  • If you have arguments of the same type, you can pass in the list of arguments and only declare the type once, after the last argument in the list. 
    • func addValues(value1, value2 int) int {
    •     return value1 + value2
    • }
  • You can also create functions that accept arbitrary numbers of values as long as they’re all of the same type. You declare the name of the values you’re passing in, then three dots and then the type. 
    • func addAllValues(values ...int) int {
        sum := 0
        for i := range values {
          sum += values[i]
        }
        return sum
      }
  • Also, all of the above functions start with a lower-case initial character. That makes them private to the current package, that is, they aren’t exported for use outside this package. If you change their initial character to upper case, a function becomes public, accessible to the rest of the application.
  • Deferred function calls are pushed onto a stack. When a function returns, its deferred calls are executed in last-in-first-out order.
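
A tiny self-contained illustration of that last-in-first-out order:

package main

import "fmt"

func main() {
    for i := 0; i < 3; i++ {
        // Each deferred call is pushed onto a stack...
        defer fmt.Println(i)
    }
    // ...and the stack unwinds when main returns, printing 2, 1, 0.
}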

A tag for a field allows you to attach meta-information to the field, which can be acquired using reflection. Usually it is used to provide transformation info on how a struct field is encoded to or decoded from another format (or stored/retrieved from a database), but you can use it to store whatever meta-info you want, either intended for another package or for your own use.

As mentioned in the documentation of reflect.StructTag, by convention the value of a tag string is a space-separated list of key:"value" pairs, for example:

type User struct {
    Name string `json:"name" xml:"name"`
}

The key usually denotes the package that the subsequent "value" is for, for example json keys are processed/used by the encoding/json package.

If multiple pieces of information are to be passed in the "value", they are usually separated by a comma (','), e.g.

Name string `json:"name,omitempty" xml:"name"`

Usually a dash value ('-') for the "value" means to exclude the field from the process (e.g. in case of json it means not to marshal or unmarshal that field).

https://stackoverflow.com/questions/10858787/what-are-the-uses-for-tags-in-go
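
A minimal sketch of reading those tags via reflection (reusing the User type above):

package main

import (
    "fmt"
    "reflect"
)

type User struct {
    Name string `json:"name" xml:"name"`
}

func main() {
    // Look up the struct field and read individual tag values by key.
    field, _ := reflect.TypeOf(User{}).FieldByName("Name")
    fmt.Println(field.Tag.Get("json")) // prints: name
    fmt.Println(field.Tag.Get("xml"))  // prints: name
}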

 

Edit your ~/.bashrc to add the following line:

export GOPATH=$HOME/go

The command go env GOPATH prints the effective current GOPATH; it prints the default location if the environment variable is unset.

For convenience, add the workspace’s bin subdirectory to your PATH:

$ export PATH=$PATH:$(go env GOPATH)/bin

The scripts in the rest of this document use $GOPATH instead of $(go env GOPATH) for brevity. To make the scripts run as written if you have not set GOPATH, you can substitute $HOME/go in those commands or else run:

$ export GOPATH=$(go env GOPATH)

Getting Started with Go

// Print a friendly greeting

package main

import (
    "fmt"
)

func main() {
    fmt.Println("Welcome Gophers!")
}

The main package has a special meaning in Go: it makes your code compile into an executable instead of a library package.

Go comes with a standard library which contains many packages. The fmt package contains functions for formatted printing.

We define a function with the func keyword. The body of the function is enclosed in curly braces. The function main also has a special meaning in Go: it is executed by the Go runtime when the program starts. The Println function from the fmt package prints its arguments followed by a newline. You need to prefix the function name Println with the package it came from.

Unlike C++ or Java, you don’t need to place a semicolon at the end of the line. Strings in Go start and end with double quotes. Go strings are Unicode, which means you don’t need special code to support non-English languages.

The Go Tools

Go comes with a command-line tool called go. You use the go tool for most everyday tasks, such as building, testing, benchmarking, fetching third-party packages, and more.

go run welcome.go

Once you are done developing, you’d like to build an executable that you can distribute. This is done with the go build command.

go build welcome.go

This creates a file called welcome. On Windows, it will be welcome.exe.

./welcome

Go has a built-in test framework. Go can also run benchmarks, so you’ll be able to measure the performance of your code. gofmt will format your code.

Go Basics

Numbers and Assignments

//Calculate the mean of two numbers
package main
import (
     "fmt"
)

func main() {
     var x int
     var y int

     x = 1
     y = 2

     fmt.Printf("x=%v, type of %T\n", x, x)
     fmt.Printf("y=%v, type of %T\n", y, y)

     var mean int
     mean = (x + y) / 2
     fmt.Printf("result: %v, type of %T\n", mean, mean)
}

We declare two variables, x and y, of type int. Unlike C or Java, the type comes after the variable name. In Go, you have int8, int16, int32, int64, and the unsigned versions of all of them. The int type, without size, depends on the system you are using, and is the one you will usually use. If you don’t assign a value to a variable, Go will assign the zero value for this type.

The result of the above program will be 1, not 1.5 as expected. The reason is that integer division returns an integer. We could make x, y, and mean float64 to get the desired result. Go’s type system is very strict: it doesn’t allow adding an integer to a float. That’s why we convert everything to floats.

The Go compiler can infer the type of variables for the programmer. Go has a syntax for creating a variable and assigning to it in one line, using the colon-equals operator.

x := 1.0
y := 2.0

mean := (x + y) / 2.0

Conditionals

There are two ways to specify conditions in Go, if and switch.

// Example of "if" statement
package main

import (
     "fmt"
)

func main() {
     x := 10

     if x > 5 {
          fmt.Println("x is big")
     }

     if x > 5 && x < 15 {
          fmt.Println("x is just right")
     }

     if x < 20 || x > 30 {
          fmt.Println("x is out of range")
     }
}

Unlike Java or C++, you don’t need parenthesis around the condition.

// Example of "switch" statement

package main

import (
     "fmt"
)

func main() {
     x := 2

     switch x {
     case 1:
          fmt.Println("one")
     case 2:
          fmt.Println("two")
     case 3:
          fmt.Println("three")
     default:
          fmt.Printf("many")
     }

     switch {
     case x > 100:
          fmt.Println("x is very big")
     case x > 10:
          fmt.Println("x is big")
     default:
          fmt.Println("s is small")
     }
}

Unlike some other languages, we don’t have to specify break after each case. A case value also doesn’t have to be a number; it can be a string or something else. We could use switch without an expression; then each case statement has a condition.

For Loops

FizzBuzz problem: handle the case of a number divisible by both three and five before the other cases.

package main

import (
     "fmt"
)

func main() {
     // for every number from 1 to 20
     for i := 1; i <= 20; i++ {
          if i%3 == 0 && i%5 == 0 {
               // if the number is divisible by 3 and 5 print fizz buzz
               fmt.Println("fizz buzz")
          } else if i%3 == 0 {
               // else if the number is divisible by 3 print fizz
               fmt.Println("fizz")
          } else if i%5 == 0 {
               // else if the number is divisible by 5 print buzz
               fmt.Println("buzz")
          } else {
               // else print the number
               fmt.Println(i)
          }
     }
}

String

A string is defined with double quotes. Strings in Go are immutable; you cannot change them once you’ve created them. To access parts of a string we can use slicing, such as book[:4], book[4:], and book[4:11]. We can use the plus sign to concatenate two strings. We can construct multi-line strings with backticks.

// Go strings
package main

import (
     "fmt"
)

func main() {
     book := "The colour of magic"
     fmt.Println(book)

     fmt.Println(len(book))

     fmt.Printf("book[0] = %v (type %T)\n", book[0], book[0])

      // strings in go are immutable
      // book[0] = 116

      // Slice (start, end), 0-based, half-open range
      // We'll get index number 4 but we won't get index 11
      fmt.Println(book[4:11])

      // Slice (no end)
      fmt.Println(book[4:])

      // Slice (no start)
      // Get everything up to, but not including, index 4
      fmt.Println(book[:4])

      // Use + to concatenate strings
      fmt.Println("t" + book[1:])

      // Multi line
      poem := `
      The road goes ever on
      Down from the door where it began
      ...
      `
      fmt.Println(poem)
}

Use Sprintf to convert a number to a string. An even-ended number is a number with the same first and last digit, such as 1, 11, or 121. How many even-ended numbers result from multiplying two four-digit numbers? For example, if you multiply 1001 by 1011, you get a number which is even-ended. It’s easier to check whether a number is even-ended by converting it to a string, which can be done with the fmt.Sprintf function.

// fmt.Sprintf example
package main

import (
     "fmt"
)

func main() {
     n := 42
     s := fmt.Sprintf("%d", n)

     fmt.Printf("s = %s (type %T)\n", s, s)

     // Print quotes around string
     fmt.Printf("s = %q (type %T)\n", s, s)
}

// Even-ended numbers
package main

import (
     "fmt"
)

func main() {
     // count = 0
     count := 0

     // for every pair of 4 digit numbers
     for a := 1000; a <= 9999; a++ {
          for b := a; b <= 9999; b++ { // don't count twice
               n := a * b

               // if a*b is even ended
               s := fmt.Sprintf("%d", n)
               if s[0] == s[len(s)-1] {
                    count++
               }
          }
     }

     // print count
     fmt.Println(count)
}

Slices

A slice is a sequence of items. All items in a slice must be of the same type.

package main

import (
     "fmt"
)

func main() {
     // Same type, a slice of string
     loons := []string{"bugs", "daffy", "taz"}
     fmt.Printf("loons = %v (type %T)\n", loons, loons)

     // Length
     fmt.Println(len(loons)) //3

     fmt.Println("----")
     // 0 indexing
     fmt.Println(loons[1]) // daffy

     fmt.Println("----")
     //slices
     fmt.Println(loons[1:]) // [daffy taz]

     fmt.Println("----")
     // for
     for i := 0; i < len(loons); i++ {
          fmt.Println(loons[i])
     }

     fmt.Println("----")     
     // Single value range
     for i := range loons {
          fmt.Println(i)
     }

     fmt.Println("----") 
     // Double value range
     for i, name := range loons {
          fmt.Printf("s% at %d\n", name, i)
     }

     fmt.Println("----")
     // Double value range, ignore index by using _
     for _, name := range loons {
          fmt.Println(name)
     }

     fmt.Println("----")
     // append
     loons = append(loons, "elmer")
     fmt.Println(loons) // [bugs daffy taz elmer]
}

There is another type in Go called array, and slices are built on top of arrays. However, in practice, you’ll seldom use arrays.
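
For illustration, here is a minimal sketch of an array and a slice backed by it (the values are illustrative):

// Array vs slice
package main

import (
     "fmt"
)

func main() {
     arr := [3]int{1, 2, 3} // array: the length is part of the type
     sl := arr[:]           // slice backed by the same array

     sl[0] = 99 // changing the slice changes the underlying array

     fmt.Println(arr) // [99 2 3]
     fmt.Println(sl)  // [99 2 3]
}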

// Calculate maximal value in a slice
package main

import (
     "fmt"
)

func main() {
     nums := []int{16, 8, 42, 4, 23, 15}
     max := nums[0] // Initialize max with first value
     // [1:] skips the first value, which is already in max
     for _, value := range nums[1:] {
          if value > max {
               max = value
          }
     }
     fmt.Println(max)
}

Map

A map is a data structure where keys point to values. In Go, all keys must be of the same type, and all values must be of the same type.

// Go's map data structure
package main

import (
     "fmt"
)

func main() {
     stocks := map[string]float64{
          "AMZN": 1699.8,
          "GOOG": 1129.19,
          "MSFT": 98.61, // Must have trailing comma in multi line
     }

     // Number of items
     fmt.Println(len(stocks))

     // Get a value
     fmt.Println(stocks["MSFT"])

     // Get zero value if not found
     fmt.Println(stocks["TSLA"]) // 0

     // Use the two-value form to see if the key was found;
     // ok will be true if the key is in the map and false otherwise.
     value, ok := stocks["TSLA"]
     if !ok {
          fmt.Println("TSLA not found")
     } else {
          fmt.Println(value)
     }

     // Set
     stocks["TSLA"] = 322.12
     fmt.Println(stocks)

     // Delete
     delete(stocks, "AMZN")
     fmt.Println(stocks)

     // Single value "for" is on keys
     for key := range stocks {
          fmt.Println(key)
     }

     // Double value "for" is key, value
     for key, value := range stocks {
          fmt.Printf("%s -> %.2f\n", key, value)
     }
}

Count how many times each word appears in a text.

package main

import (
    "fmt"
    "strings"
)

func main() {
    text := `
    Needles and pins
    Needles and pins
    Sew me a sail
    To catch me the wind
    `

    // split the text into words using the strings.Fields function
    words := strings.Fields(text)
    counts := map[string]int{} // Empty map
    for _, word := range words {
        counts[strings.ToLower(word)]++
    }

    fmt.Println(counts)
}

Functions

Defining Function

Unlike many other languages, Go functions can return more than one value.

// Basic function definition
package main

import (
    "fmt"
)

// add adds a to b
func add(a int, b int) int {
    return a + b
}

// divmod returns quotient and remainder
func divmod(a int, b int) (int, int) {
    return a / b, a % b
}

func main() {
    val := add(1, 2)
    fmt.Println(val)

    div, mod := divmod(7, 2)
    fmt.Printf("div=%d, mod=%d\n", div, mod)
}

Parameter Passing

The following function takes a slice of integers and an index and doubles the value at that index.

package main

import (
    "fmt"
)

func doubleAt(values []int, i int) {
    values[i] *= 2
}

func main() {
    values := []int{1, 2, 3, 4}
    doubleAt(values, 2)
    fmt.Println(values)
}

When Go passes an integer to a function, it passes it by value, meaning Go creates a copy of the integer; any changes to it inside the function won’t affect the original value. However, when Go passes a slice or a map to a function, the function gets a reference to the same underlying data, so any changes you make to the slice inside the function persist after the function returns. Go’s pointers are not like C or C++ pointers; they are much safer. You can pass a pointer to an object, but you can’t do the infamous pointer arithmetic.

package main

import (
    "fmt"
)

func doubleAt(values []int, i int) {
    values[i] *= 2
}

func double(n int) {
    n *= 2
}

func doublePtr(n *int) {
    *n *= 2
}

func main() {
    values := []int{1, 2, 3, 4}
    doubleAt(values, 2)
    fmt.Println(values)

    val := 10
    double(val)
    fmt.Println(val)
    doublePtr(&val)
    fmt.Println(val)
}

Error Return

Go functions can return more than one value, and this is used extensively in Go to signal errors. A function that can fail will usually return the error as its last return value. Here we have a sqrt function that calculates the square root of a number, but unlike the one in the math standard library, it returns an error on negative numbers. We return two values: one is a float64, which is the result, and the other is of type error, a built-in type used throughout Go. First we check: if n is smaller than zero, we return 0.0, because we have to return some value for the float, and use the fmt.Errorf function to create a new error. If n is not negative, we return the square root of n and nil. nil is the value Go uses to signal nothing; it is very much like null or None in other languages.

package main

import (
    "fmt"
    "math"
)

func sqrt(n float64) (float64, error) {
    if n < 0 {
        return 0.0, fmt.Errorf("sqrt of negative value (%f)", n)
    }

    return math.Sqrt(n), nil
}

func main() {
    s1, err := sqrt(2.0)
    if err != nil {
        fmt.Printf("ERROR: %s\n", err)
    } else {
        fmt.Println(s1)
    }

    s2, err := sqrt(-2.0)
    if err != nil {
        fmt.Printf("ERROR: %s\n", err)
    } else {
        fmt.Println(s2)
    }
}

Defer

Go has a garbage collector, which means you don’t have to deal with memory management: when you allocate an object and then stop using it, Go’s garbage collector will clean it up. However, memory is just one kind of resource, and you may use others in your program, for example files, sockets, and virtual machines. You’d like to make sure these resources are closed when you’re done with them as well. To make sure a resource is closed, use defer.

package main

import (
    "fmt"
)

func cleanup(name string) {
    fmt.Printf("Cleaning up %s\n", name)
}

func worker() {
    defer cleanup("A")
    defer cleanup("B")

    fmt.Println("worker")
}

func main() {
    worker()
}

What’s nice about defer is that you write it just after you acquire the resource, so you don’t forget to free it. Deferred calls run in reverse order: in the example above, cleanup("B") runs first, then cleanup("A"), and of course the worker code runs before any deferred call.
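
For instance, a minimal sketch of defer guarding a real resource; the filename is illustrative and assumed to exist:

// Defer with a file
package main

import (
    "fmt"
    "os"
)

func main() {
    file, err := os.Open("hello.txt") // hypothetical file
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    defer file.Close() // written right after acquiring the resource

    // ... work with file ...
}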

Write a function that gets a URL and returns the value of the Content-Type HTTP response header. The function should return an error if it can’t perform a GET request. The signature of the function is contentType: it gets a URL as a string, and returns a string and an error. Use the net/http package’s Get function to make the HTTP call, use the resp.Header.Get method to read the value of a header, and make sure the response body is closed properly.

// Writing a function that return Content-Type header
package main

import (
    "fmt"
    "net/http"
)

// contentType will return the value of Content-Type header returned 
// by making an HTTP GET request to url
func contentType(url string) (string, error) {
    resp, err := http.Get(url)
    if err != nil {
        return "", err
    }

    defer resp.Body.Close() // Make sure we close the body

    ctype := resp.Header.Get("Content-Type")
    if ctype == "" { // Return error if Content-Type header not found
        return "", fmt.Errorf("can't find Content-Type header")
    }

    return ctype, nil
}

func main() {
    ctype, err := contentType("https://linkedin.com")
    if err != nil {
        fmt.Printf("ERROR: %s\n", err)
    } else {
        fmt.Println(ctype)
    }
}

Object-Oriented

Structs

In Go you’ll use a struct to combine several fields into a data type.

// struct demo
package main

import (
    "fmt"
)

// Trade is a trade in stocks
type Trade struct {
    Symbol string  // Stock symbol
    Volume int     // Number of shares
    Price  float64 // Trade price
    Buy    bool    // true if buy trade, false if sell trade
}

func main() {
    t1 := Trade{"MSFT", 10, 99.98, true}
    fmt.Println(t1)

    fmt.Printf("%+v\n", t1)

    fmt.Println(t1.Symbol)

    t2 := Trade{
        Symbol: "MSFT",
        Volume: 10,
        Price:  99.98,
        Buy:    true,
    }
    fmt.Printf("%+v\n", t2)

    t3 := Trade{}
    fmt.Printf("%+v\n", t3)
}

Methods

Structs are nice for organizing data, but they have more power. We can define methods on structs.

// Method demo
package main

import (
    "fmt"
)

// Trade is a trade in stocks
type Trade struct {
    Symbol string  // Stock symbol
    Volume int     // Number of shares
    Price  float64 // Trade price
    Buy    bool    // true if buy trade, false if sell trade
}

// Value returns the trade value
func (t *Trade) Value() float64 {
    value := float64(t.Volume) * t.Price
    if t.Buy {
        value = -value
    }

    return value
}

func main() {
    t := Trade{
        Symbol: "MSFT",
        Volume: 10,
        Price:  99.98,
        Buy:    true,
    }
    fmt.Println(t.Value())
}

The Value method gets a pointer to a Trade as its receiver. You’ll usually use a pointer receiver; let’s see why. Assume we define a Point with an X and a Y, and a Move method that adds dx and dy, but with a value receiver (p Point) instead of a pointer. In main we create a point and move it, expecting X to be three and Y to be five, yet the program prints X is one and Y is two. The reason is that without a pointer receiver, Move gets a copy of the Point struct, so the changes are lost when the method returns. This is why you’ll usually use a pointer receiver.

// Receiver example
package main

import (
    "fmt"
)

// Point is a 2d point
type Point struct {
    X int
    Y int
}

// Move moves the point
func (p *Point) Move(dx int, dy int) {
    p.X += dx
    p.Y += dy
}

func main() {
    p := &Point{1, 2}
    p.Move(2, 3)
    fmt.Printf("%+v\n", p)
}
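
For contrast, here is a minimal sketch of the value-receiver version described above; Move operates on a copy, so the point stays put:

// Value receiver example
package main

import (
    "fmt"
)

// Point is a 2d point
type Point struct {
    X int
    Y int
}

// Move gets a copy of the Point, so the caller's point is unchanged
func (p Point) Move(dx int, dy int) {
    p.X += dx
    p.Y += dy
}

func main() {
    p := Point{1, 2}
    p.Move(2, 3)
    fmt.Printf("%+v\n", p) // {X:1 Y:2}
}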

New Structs with Functions

If you’re coming from an object-oriented language, such as Python, Java, C++, and others, you are used to having a constructor or initializer method that is called when an object is created. In Go, you write a function, usually starting with New, that returns a new object. This New function usually returns the pointer to the created object, and optionally, an error value, if it’s possible there was an error creating the object.

// Constructor demo
package main

import (
    "fmt"
    "os"
)

// Trade is a trade in stocks
type Trade struct {
    Symbol string  // Stock symbol
    Volume int     // Number of shares
    Price  float64 // Trade price
    Buy    bool    // true if buy trade, false if sell trade
}

// NewTrade will create a new trade and will validate the input
func NewTrade(symbol string, volume int, price float64, buy bool) (*Trade, error) {
    if symbol == "" {
        return nil, fmt.Errorf("symbol can't be empty")
    }

    if volume <= 0 {
        return nil, fmt.Errorf("volume must be > 0 (was %d)", volume)
    }

    if price <= 0.0 {
        return nil, fmt.Errorf("price must be > 0 (was %f)", price)
    }

    trade := &Trade{
        Symbol: symbol,
        Volume: volume,
        Price:  price,
        Buy:    buy,
    }
    return trade, nil
}

// Value returns the trade value
func (t *Trade) Value() float64 {
    value := float64(t.Volume) * t.Price
    if t.Buy {
        value = -value
    }

    return value
}

func main() {
    t, err := NewTrade("MSFT", 10, 99.98, true)

    if err != nil {
        fmt.Printf("error: can't create trade - %s\n", err)
        os.Exit(1)
    }

    fmt.Println(t.Value())
}

Square

Define a Square struct, which has two fields: Center of type Point and Length of type int. Add two methods:

Move(dx int, dy int)
Area() int

Also write:

NewSquare(x int, y int, length int)(*Square, error)

Point

// Point is a 2D point
type Point struct {
     X int
     Y int
}

// Move moves the point
func (p *Point) Move(dx int, dy int) {
     p.X += dx
     p.Y += dy
}

 

package main

import (
    "fmt"
    "log"
)

// Point is a 2d point
type Point struct {
    X int
    Y int
}

// Move moves the point
func (p *Point) Move(dx int, dy int) {
    p.X += dx
    p.Y += dy
}

// Square is a square
type Square struct {
    Center Point
    Length int
}

// NewSquare returns a new square
func NewSquare(x int, y int, length int) (*Square, error) {
    if length <= 0 {
        return nil, fmt.Errorf("length must be > 0")
    }

    s := &Square{
        Center: Point{x, y},
        Length: length,
    }

    return s, nil
}

// Move moves the square
func (s *Square) Move(dx int, dy int) {
    s.Center.Move(dx, dy)
}

// Area returns the square area
func (s *Square) Area() int {
    return s.Length * s.Length
}

func main() {
    s, err := NewSquare(1, 1, 10)
    if err != nil {
        log.Fatalf("ERROR: can't create square")
    }

    s.Move(2, 3)
    fmt.Printf("%+v\n", s)
    fmt.Println(s.Area())
}

Interfaces

An interface is a collection of methods. To satisfy an interface, a type’s methods must match in name and in signature: parameters and return values. In our case, the interface has just one method, Area, which takes no arguments and returns a float64, and both Circle and Square satisfy it. Here’s how we can use it: we create a square and a circle, then create a slice of shapes holding both, and then we can call sumAreas on the slice and print the total area.

package main

import (
    "fmt"
    "math"
)

// Square is a square
type Square struct {
    Length float64
}

// Area returns the area of the square
func (s *Square) Area() float64 {
    return s.Length * s.Length
}

// Circle is a circle
type Circle struct {
    Radius float64
}

// Area returns the area of the circle
func (c *Circle) Area() float64 {
    return math.Pi * c.Radius * c.Radius
}

// sumAreas returns the sum of all areas in the slice
func sumAreas(shapes []Shape) float64 {
    total := 0.0

    for _, shape := range shapes {
        total += shape.Area()
    }

    return total
}

// Shape is a shape interface
type Shape interface {
    Area() float64
}

func main() {
    s := &Square{20}
    fmt.Println(s.Area())

    c := &Circle{10}
    fmt.Println(c.Area())

    shapes := []Shape{s, c}
    sa := sumAreas(shapes)
    fmt.Println(sa)
}

Write a struct called Capper that holds another io.Writer and transforms everything written through it to uppercase. Capper should implement io.Writer.

type Capper struct {
     wtr io.Writer
}
func (c *Capper) Write(p []byte) (n int, err error) {
     // Your code goes here
}

package main

import (
    "fmt"
    "io"
    "os"
)

// Capper implements io.Writer and turns everything to uppercase
type Capper struct {
    wtr io.Writer
}

func (c *Capper) Write(p []byte) (n int, err error) {
    diff := byte('a' - 'A')
    out := make([]byte, len(p))
    // Rename the loop variable to b so it doesn't shadow the receiver c
    for i, b := range p {
        if b >= 'a' && b <= 'z' {
            b -= diff
        }
        out[i] = b
    }

    return c.wtr.Write(out)
}

func main() {
    c := &Capper{os.Stdout}
    fmt.Fprintln(c, "Hello there")
}

Error Handling

Install pkg/errors:

go get github.com/pkg/errors

Print the error to standard error, and to a log file including the stack trace (%+v on a wrapped pkg/errors error prints the stack):

// pkg/errors example

package main

import (
    "fmt"
    "log"
    "os"
    "github.com/pkg/errors"
)

// Config holds configuration
type Config struct {
    // configuration fields go here (redacted)
}

func readConfig(path string) (*Config, error) {
    file, err := os.Open(path)
    if err != nil {
        return nil, errors.Wrap(err, "can't open configuration file")
    }

    defer file.Close()

    cfg := &Config{}
    // Parse file here (redacted)

    return cfg, nil
}

func setupLogging() {
    out, err := os.OpenFile("app.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        return
    }

    log.SetOutput(out)
}

func main() {
    setupLogging()
    cfg, err := readConfig("/path/to/config.toml")
    if err != nil {
        fmt.Fprintf(os.Stderr, "error: %s\n", err)
        log.Printf("error: %+v", err)
        os.Exit(1)
    }

    // Normal operation (redacted)
    fmt.Println(cfg)
}

Write a killServer(pidFile string) error function that reads a process identifier from pidFile, converts it to an integer, and prints “killing <pid>” (instead of actually killing the process with os.Kill).

  • Use github.com/pkg/errors to wrap errors
  • Use io/ioutil ReadFile to read the file
  • Use strconv.Atoi to convert the file content to an integer

package main

import (
    "fmt"
    "io/ioutil"
    "log"
    "os"
    "strconv"
    "strings"
    "github.com/pkg/errors"
)

func killServer(pidFile string) error {
    data, err := ioutil.ReadFile(pidFile)
    if err != nil {
        return errors.Wrap(err, "can't open pid file (is server running?)")
    }

    if err := os.Remove(pidFile); err != nil {
        // We can go on if we fail here
        log.Printf("warning: can't remove pid file - %s", err)
    }

    strPID := strings.TrimSpace(string(data))
    pid, err := strconv.Atoi(strPID)
    if err != nil {
        return errors.Wrap(err, "bad process ID")
    }

    // Simulate kill
    fmt.Printf("killing server with pid=%d\n", pid)
    return nil
}

func main() {
    if err := killServer("server.pid"); err != nil {
        fmt.Fprintf(os.Stderr, "error: %s\n", err)
        os.Exit(1)
    }
}

Concurrency

Goroutines

One of Go’s many advantages over traditional languages is the way it handles concurrency. To quote Rob Pike in his excellent “Concurrency is not Parallelism” talk, which I highly recommend you watch: concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of computations. Go’s concurrency primitive is the goroutine. It’s very lightweight, and unlike processes or threads, you can spin up tens of thousands of goroutines on a single machine.

// Get content type of sites

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func returnType(url string) {
    resp, err := http.Get(url)
    if err != nil {
        fmt.Printf("error: %s\n", err)
        return
    }

    defer resp.Body.Close()

    ctype := resp.Header.Get("content-type")

    fmt.Printf("%s -> %s\n", url, ctype)
}

func main() {
    urls := []string{
        "https://golang.org",
        "https://api.github.com",
        "https://httpbin.org/xml",
    }

    var wg sync.WaitGroup

    for _, url := range urls {
        wg.Add(1)
        go func(url string) {
            returnType(url)
            wg.Done()
        }(url)
    }
    wg.Wait()
}

Channel

The preferred way to communicate between goroutines is channels. Channels are typed pipes: you send values at one end and receive them at the other. If you try to receive and there’s nothing in the channel, you’ll block. The sending side is a bit more complicated, because there are actually two kinds of channels: buffered and unbuffered. When you send a value through an unbuffered channel, you block until someone receives at the other end. Buffered channels are a bit different: each has a capacity, and sends don’t block until the buffer is full; once it is, the next send blocks. In a way, buffered channels are somewhat like bounded queues.

// channels

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan int)
    // This will block
    /*
        <-ch
        fmt.Println("Here")
    */

    go func() {
        // Send number of the channel
        ch <- 353
    }()

    // Receive from the channel
    val := <-ch
    fmt.Printf("got %d\n", val)
    fmt.Println("-----")
    // Send multiple
    go func() {
        for i := 0; i < 3; i++ {
            fmt.Printf("sending %d\n", i)
            ch <- i
            time.Sleep(time.Second)
        }
    }()

    for i := 0; i < 3; i++ {
        val := <-ch
        fmt.Printf("received %d\n", val)
    }

    fmt.Println("-----")

    // close to signal we're done
    go func() {
        for i := 0; i < 3; i++ {
            fmt.Printf("sending %d\n", i)
            ch <- i
            time.Sleep(time.Second)
        }
        close(ch)
    }()

    for i := range ch {
        fmt.Printf("received %d\n", i)
    }
}
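
The buffered kind can be sketched as follows; with a capacity of 2, the first two sends don’t block:

// Buffered channel example
package main

import (
    "fmt"
)

func main() {
    ch := make(chan int, 2) // capacity of 2

    ch <- 1 // doesn't block
    ch <- 2 // doesn't block
    // A third send would block until someone receives

    fmt.Println(<-ch) // 1
    fmt.Println(<-ch) // 2
}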

Select

The built-in select statement lets us work with several channels at once. Whenever one of the channels being selected on becomes ready, either for sending or receiving, the corresponding case runs.

// select example

package main

import (
    "fmt"
    "time"
)

func main() {
    ch1, ch2 := make(chan int), make(chan int)
    go func() {
        ch1 <- 42
    }()

    select {
    case val := <-ch1:
        fmt.Printf("got %d from ch1\n", val)
    case val := <-ch2:
        fmt.Printf("got %d from ch2\n", val)
    }

    fmt.Println("----")
    out := make(chan float64)

    go func() {
        time.Sleep(100 * time.Millisecond)
        out <- 3.14
    }()

    select {
    case val := <-out:
        fmt.Printf("got %f\n", val)
    case <-time.After(20 * time.Millisecond):
        fmt.Println("timeout")
    }
}

Project Management

Imports

To install packages, we use the go get command.

go get github.com/pelletier/go-toml

When you import a package, you can give it a name. The name can be the underscore character, to denote that the package will not be explicitly used in this file; the Go compiler will fail the build if any imported package is unused.

import (
    "fmt"
    "log"
    "os"

    toml "github.com/pelletier/go-toml"

)
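
A package can also be imported with the underscore name, purely for its side effects. A minimal sketch (net/http/pprof is a standard-library package that registers debug handlers when imported):

import (
    "net/http"
    _ "net/http/pprof" // imported only for its side effects
)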

Renaming packages also lets us use two packages with the same name in the same program. Once a package is imported, we can only use the functions and variables in it that start with a capital letter. Everything else is private and can be used only from inside the package.
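
For example, a minimal sketch with hypothetical names:

package cfg

// Config holds configuration (hypothetical)
type Config struct{}

// Load is exported (capitalized): callable from other packages
func Load(path string) (*Config, error) {
    return parse(path)
}

// parse is unexported (lowercase): visible only inside package cfg
func parse(path string) (*Config, error) {
    return &Config{}, nil
}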

Once your program is built, all the packages it depends on are packed into the executable. This means when you deploy a program, you don’t need any other installation step.

Manage Requirements

The go get tool installs one dependency at a time, and it installs the latest version of the package. When working as a team, we’d like a way to document dependencies and their versions. Pinning a specific version of a package also protects you from breaking changes the package might introduce. In Go 1.11, we got the new go mod tool to handle dependency management.

Create a module named cfg:

go mod init cfg

This generates a go.mod file; after the dependency is added, it looks like:

module cfg

require github.com/pelletier/go-toml v1.2.0

Testing

Test files in Go end with the suffix _test.go. For example, use sqrt_test.go to test sqrt.go.

package sqrt

import (
    "errors"
)

// Common errors
var (
    ErrNegSqrt    = errors.New("sqrt of negative number")
    ErrNoSolution = errors.New("no solution found")
)

// Abs returns the absolute value of val
func Abs(val float64) float64 {
    if val < 0 {
        return -val
    }
    return val
}

// Sqrt returns the square root of a number
func Sqrt(val float64) (float64, error) {
    if val < 0.0 {
        return 0.0, ErrNegSqrt
    }
    if val == 0.0 {
        return 0.0, nil // shortcut
    }

    guess, epsilon := 1.0, 0.00001
    for i := 0; i < 10000; i++ {
        if Abs(guess*guess-val) <= epsilon {
            return guess, nil
        }
        guess = (val/guess + guess) / 2.0
    }

    return 0.0, ErrNoSolution
}

sqrt_test.go:

package sqrt

import (
    "fmt"
    "testing"
)

func almostEqual(v1, v2 float64) bool {
    return Abs(v1-v2) <= 0.001
}

func TestSimple(t *testing.T) {
    val, err := Sqrt(2)
    if err != nil {
        t.Fatalf("error in calculation - %s", err)
    }

    if !almostEqual(val, 1.414214) {
        t.Fatalf("bad value - %f", val)
    }
}

type testCase struct {
    value    float64
    expected float64
}

func TestMany(t *testing.T) {
    testCases := []testCase{
        {0.0, 0.0},
        {2.0, 1.414214},
        {9.0, 3.0},
    }

    for _, tc := range testCases {
        t.Run(fmt.Sprintf("%f", tc.value), func(t *testing.T) {
            out, err := Sqrt(tc.value)
            if err != nil {
                t.Fatal("error")
            }

            if !almostEqual(out, tc.expected) {
                t.Fatalf("%f != %f", out, tc.expected)
            }
        })
    }
}

If you’d like to see the test names, you can pass the -v flag:

go test -v

In some cases, we’d like to run just a single test, using the -run switch:

go test -run TestSimple -v

Benchmarking and Profiling

One of the advantages of using Go is that it’s fast. When you write code, you sometimes want to benchmark it to make sure you’re not making it slower when adding features or fixing bugs. Let’s benchmark the Sqrt function. We create a benchmark in a test file ending with _test.go and call it Benchmark followed by the name of the benchmark. It gets one parameter, a pointer to testing.B. In the benchmark we iterate b.N times, calling our function.

package sqrt

import (
    "testing"
)

func almostEqual(v1, v2 float64) bool {
    return Abs(v1-v2) <= 0.001
}

func TestSimple(t *testing.T) {
    val, err := Sqrt(2)
    if err != nil {
        t.Fatalf("error in calculation - %s", err)
    }
    if !almostEqual(val, 1.414214) {
        t.Fatalf("bad value - %f", val)
    }
}

func BenchmarkSqrt(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _, err := Sqrt(float64(i))
        if err != nil {
            b.Fatal(err)
        }
    }
}

Run benchmarks (dot means all the benchmarks):

go test -v -bench .

If you’re interested only in the benchmarks and not in the tests, you can specify -run with a name that does not match any other test. This way you’ll get only the benchmarks running and not the tests themselves.

go test -v -bench . -run TTT

Before optimizing a program, you need to profile it to see where it spends its time. You can use your benchmark for profiling. 

go test -v -bench . -run TTT -cpuprofile=prof.out

And now that we have prof.out, we can use the pprof tool:

go tool pprof prof.out

To see a function, we use list:

(pprof) list Sqrt

We can see every line and how much time it took, both flat and cumulative.

Networking

JSON

JSON can also easily be parsed in the browser, making it the preferred encoding for REST APIs. Go comes with a built-in encoding/json library. We can encode to an io.Writer, decode from an io.Reader, or work with byte slices.

// JSON example
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "os"
)

var data = `
{
  "user": "Scrooge McDuck",
  "type": "deposit",
  "amount": 1000000.3
}
`

// Request is a bank transaction
type Request struct {
    Login  string  `json:"user"`
    Type   string  `json:"type"`
    Amount float64 `json:"amount"`
}

func main() {
    rdr := bytes.NewBufferString(data) // Simulate a file/socket

    // Decode request
    dec := json.NewDecoder(rdr)
    req := &Request{}
    if err := dec.Decode(req); err != nil {
        log.Fatalf("error: can't decode - %s", err)
    }

    fmt.Printf("got: %+v\n", req)

    // Create response
    prevBalance := 8500000.0 // Loaded from database
    resp := map[string]interface{}{
        "ok":      true,
        "balance": prevBalance + req.Amount,
    }

    // Encode response
    enc := json.NewEncoder(os.Stdout)
    if err := enc.Encode(resp); err != nil {
        log.Fatalf("error: can't encode - %s", err)
    }
}

HTTP calls

A common way to communicate between services is HTTP with JSON bodies, also known as a REST API. You’ll find most of what you need in the net/http and encoding/json packages.

// Making HTTP calls
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
)

// Job is a job description
type Job struct {
    User   string `json:"user"`
    Action string `json:"action"`
    Count  int    `json:"count"`
}

func main() {
    // GET request
    resp, err := http.Get("https://httpbin.org/get")
    if err != nil {
        log.Fatalf("error: can't call httpbin.org")
    }

    defer resp.Body.Close()

    io.Copy(os.Stdout, resp.Body)

    fmt.Println("----")

    // POST request
    job := &Job{
        User:   "Saitama",
        Action: "punch",
        Count:  1,
    }

    var buf bytes.Buffer

    enc := json.NewEncoder(&buf)

    if err := enc.Encode(job); err != nil {
        log.Fatalf("error: can't encode job - %s", err)
    }

    resp, err = http.Post("https://httpbin.org/post", "application/json", &buf)

    if err != nil {
        log.Fatalf("error: can't call httpbin.org")
    }

    defer resp.Body.Close()

    io.Copy(os.Stdout, resp.Body)
}

// Calling GitHub API
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// User is a GitHub user's information
type User struct {
    Name        string `json:"name"`
    PublicRepos int    `json:"public_repos"`
}

// userInfo returns information on a GitHub user
func userInfo(login string) (*User, error) {
    // HTTP call
    url := fmt.Sprintf("https://api.github.com/users/%s", login)
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }

    defer resp.Body.Close()

    // Decode JSON
    user := &User{}
    dec := json.NewDecoder(resp.Body)
    if err := dec.Decode(user); err != nil {
        return nil, err
    }
    return user, nil
}

func main() {
    user, err := userInfo("tebeka")
    if err != nil {
        log.Fatalf("error: %s", err)
    }

    fmt.Printf("%+v\n", user)
}

Working with Files and the Web

Writing to a Text File

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "os"
)

func main() {
    
    content := "Hello from Go!"

    file, err := os.Create("./fromString.txt")
    checkError(err)
    defer file.Close()

    ln, err := io.WriteString(file, content)
    checkError(err)

    fmt.Printf("All done with file of %v characters", ln)

    bytes := []byte(content)
    ioutil.WriteFile("./fromBytes.txt", bytes, 0644)
}

func checkError(err error) {
    if err != nil {
        panic(err)
    }
}

Reading from a Text File

package main

import (
    "fmt"
    "io/ioutil"
)

func main() {
    fileName := "./hello.txt"
    
    content, err := ioutil.ReadFile(fileName)
    checkError(err)

    result := string(content)
    
    fmt.Println("Read from file:", result)  
}

func checkError(err error) {
    if err != nil {
        panic(err)
    }
}

Walking a Directory Tree

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// root is set in main and used by processPath to skip the tree root
var root string

func main() {

    root, _ = filepath.Abs(".")
    fmt.Println("Processing path", root)
    
    err := filepath.Walk(root, processPath)
    if err != nil {
        fmt.Println("error:", err)
    }
}

func processPath(path string, info os.FileInfo, err error) error {
    if err != nil {
        return err
    }
    
    if path != "." {
        if info.IsDir() {
            fmt.Println("Directory:", path)
        } else {
            fmt.Println("File:", path)
        }
    }
    
    return nil
}

Reading a Text File from the Web

package main

import (
    "fmt"
    "net/http"
    "io/ioutil"
)

func main() {
    url := "http://services.explorecalifornia.org/json/tours.php"

    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    
    fmt.Printf("Response type: %T\n", resp)

    defer resp.Body.Close()
    
    bytes, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    
    content := string(bytes)
    fmt.Print(content) 
}

Creating and Parsing a JSON String

package main

import (
    "fmt"
    "net/http"
    "io/ioutil"
    "encoding/json"
    "strings"
    "math/big"
)

type Tour struct {
    Name, Price string
}

func main() {
    url := "http://services.explorecalifornia.org/json/tours.php"
    content := contentFromServer(url)

    tours := toursFromJson(content)
    // fmt.Println(tours)
    
    for _, tour := range tours {
        price, _, _ := big.ParseFloat(tour.Price, 10, 2, big.ToZero)
        fmt.Printf("%v ($%.2f)\n", tour.Name, price)
    }
}

func checkError(err error) {
    if err != nil {
        panic(err)
    }
}

func contentFromServer(url string) string {
    
    resp, err := http.Get(url)
    checkError(err)
    
    defer resp.Body.Close()
    bytes, err := ioutil.ReadAll(resp.Body)
    checkError(err)

    return string(bytes)
}

func toursFromJson(content string) []Tour {
    tours := make([]Tour, 0, 20)
    
    decoder := json.NewDecoder(strings.NewReader(content))
    _, err := decoder.Token()
    checkError(err)
    
    var tour Tour
    for decoder.More() {
        err := decoder.Decode(&tour)
        checkError(err)
        tours = append(tours, tour)
    }
    
    return tours
}

Creating a Simple HTTP Server

package main

import (
    "fmt"
    "net/http"
)

type Hello struct{}

func (h Hello) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprint(w, "<h1>Hello from the Go web server!</h1>")

}

func main() {
    var h Hello
    err := http.ListenAndServe("localhost:4000", h)
    checkError(err)
}

func checkError(err error) {
    if err != nil {
        panic(err)
    }
}