What is MinIO?

Setting up a MinIO server

In this tutorial I will explain how to set up a MinIO server to take advantage of its storage architecture. For anyone who doesn't already know it: MinIO is a high-performance, distributed object storage system. It is software-defined, runs on industry-standard hardware, and is 100% open source. It is intentionally built to serve objects in a single-layer architecture, achieving all of the required functionality without compromise. The result is a cloud-native object server that is both scalable and lightweight.

As the world of cloud engineering has matured, the question arises: why do we need MinIO at all?

Keep in mind that if you deploy your solution in the cloud, you will typically use a managed object store such as AWS S3, Azure Blob Storage, or Alibaba OSS. If your solution stays on-premises, MinIO plays the same role: it provides the same kind of object storage service the cloud providers offer.

1. How does it work?

At a high level, MinIO consists of two parts: the client and the server. The server also provides a dashboard, accessible through a web UI or file browser. Both parts are relatively easy to set up, and if you are familiar with the CLI (command-line interface) you will find them easy to work with.
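To make these two parts concrete, here is a minimal single-node sketch (the /data directory, the credentials, and the alias name "local" are placeholders for illustration, not part of this tutorial's setup): the server binary serves a directory as object storage, and the mc client is pointed at it.

./minio server /data                                          # server part: serve /data on port 9000
./mc config host add local http://127.0.0.1:9000 KEY SECRET   # client part: register the endpoint
./mc ls local                                                 # list buckets through the client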

However, at the production level everything has to be distributed, meaning the solution must deliver good large-scale performance, scale with growth, and provide high availability. With this in mind, MinIO has its own concept called distributed erasure code.

This is a reliable approach to spreading data across multiple drives and retrieving it even when some of the drives are unavailable. With it, you can lose up to half of the drives and your data can still be read.
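For the topology used in this tutorial (2 servers with 4 drives each, so 8 drives in total), the arithmetic works out roughly as follows; the half-data, half-parity split is MinIO's default behavior, so treat the numbers as illustrative:

# 2 servers x 4 drives = 8 drives in one erasure set
# default split: 4 data shards + 4 parity shards per object
# objects therefore remain readable with up to 4 of the 8 drives lost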

In this tutorial I will show you how to install and configure MinIO servers in distributed erasure-code mode. Afterwards we will take a quick look at the client side to see how the MinIO service can be used by an end user.

2. Installation phase

For the installation phase I will configure 2 servers as a MinIO cluster, in preparation for the distributed erasure-code setup.

Now let's list the 4 disk drives that we will partition as block devices for MinIO's use. Since our architecture uses multiple servers, the minimum requirement is 2 drives per server; with a single server the minimum is 1 drive. Please refer to the MinIO documentation for the detailed requirements of an erasure-code design.

Below are the steps:

[root@server1 ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a4fd8

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   209715199   103808000   8e  Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdc: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-root: 104.1 GB, 104144568320 bytes, 203407360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/centos-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As you can see above, there are 4 drives of 8 GB each (/dev/sdb through /dev/sde) attached to our server.

Next we create a partition on each drive, then create a dedicated directory to mount each new partition on. Below are the steps.

[root@server1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x4217c4d9.

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4217c4d9

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-16777215, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215):
Using default value 16777215
Partition 1 of type Linux and of size 8 GiB is set

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x4217c4d9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16777215     8387584   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@server1 ~]# ls /dev/sdb*
/dev/sdb  /dev/sdb1
[root@server1 ~]# mkfs.xfs -f /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=524224 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2096896, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server1 ~]#
[root@server1 ~]# mkdir -p /opt/drive1
[root@server1 ~]# mkdir -p /opt/drive2
[root@server1 ~]# mkdir -p /opt/drive3
[root@server1 ~]# mkdir -p /opt/drive4
[root@server1 ~]#
[root@server1 ~]# mount /dev/sdb1 /opt/drive1
[root@server1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   97G  3.8G   94G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  145M  870M  15% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/sdb1                8.0G   33M  8.0G   1% /opt/drive1
[root@server1 ~]#

Repeat the same process to create a partition on each of the remaining drives, then mount each one onto the directory created for it. The end result should look like the output below:

[root@server1 ~]# mount /dev/sdb1 /opt/drive1
[root@server1 ~]# mount /dev/sdc1 /opt/drive2
[root@server1 ~]# mount /dev/sdd1 /opt/drive3
[root@server1 ~]# mount /dev/sde1 /opt/drive4
[root@server1 ~]#
[root@server1 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   97G  3.8G   94G   4% /
devtmpfs                 1.9G     0  1.9G   0% /dev
tmpfs                    1.9G     0  1.9G   0% /dev/shm
tmpfs                    1.9G  8.6M  1.9G   1% /run
tmpfs                    1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sda1               1014M  145M  870M  15% /boot
tmpfs                    379M     0  379M   0% /run/user/0
/dev/sdb1                8.0G   33M  8.0G   1% /opt/drive1
/dev/sdc1                8.0G   33M  8.0G   1% /opt/drive2
/dev/sdd1                8.0G   33M  8.0G   1% /opt/drive3
/dev/sde1                8.0G   33M  8.0G   1% /opt/drive4

OK, now that the drive requirement is met on Server 1, repeat the same configuration on Server 2 (a scripted shortcut is sketched below).
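As a shortcut, the remaining drives can also be prepared non-interactively. The sketch below makes the same assumptions as the steps above (/dev/sdc through /dev/sde mapping to /opt/drive2 through /opt/drive4) and uses parted instead of the interactive fdisk session; the fstab entries keep the mounts across reboots. Verify the device names on your own machines before running it.

# Sketch: partition, format and mount the remaining drives in one pass
for pair in sdc:2 sdd:3 sde:4; do
    dev=/dev/${pair%%:*}
    dir=/opt/drive${pair##*:}
    parted -s "$dev" mklabel msdos mkpart primary xfs 2048s 100%   # new MBR label plus one partition
    mkfs.xfs -f "${dev}1"                                          # format it with XFS
    mkdir -p "$dir" && mount "${dev}1" "$dir"                      # mount into the MinIO directory
    echo "${dev}1 $dir xfs defaults 0 0" >> /etc/fstab             # persist the mount across reboots
done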

3. Configuration phase

Now that both servers are prepared, let's move on to installing the MinIO service. First, download the MinIO package as shown below:

[root@server1 ~]# wget https://dl.min.io/server/minio/release/linux-amd64/minio && chmod +x minio
--2019-09-29 22:23:57--  https://dl.min.io/server/minio/release/linux-amd64/minio
Resolving dl.min.io (dl.min.io)... 178.128.69.202
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 43831296 (42M) [application/octet-stream]
Saving to: 'minio'

 3% [=>                                  ] 1,335,296   106KB/s  eta 6m 33s

Now repeat the same download on server 2.
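Before continuing, it is worth confirming that the downloaded binary actually runs; the --version flag simply prints the build information:

[root@server1 ~]# ./minio --version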

When everything is done, let's get started with the MinIO configuration. We will define MINIO_ACCESS_KEY and MINIO_SECRET_KEY as the authentication credentials. The configuration is given below:

[root@server1 ~]# export MINIO_ACCESS_KEY=shahril && export MINIO_SECRET_KEY=shahril123
[root@server1 ~]# ./minio server http://10.124.12.{141..142}:9000/opt/drive{1..4}
Waiting for a minimum of 4 disks to come online (elapsed 0s)
Waiting for a minimum of 4 disks to come online (elapsed 2s)
Waiting for a minimum of 4 disks to come online (elapsed 3s)
Waiting for a minimum of 4 disks to come online (elapsed 3s)

Waiting for all other servers to be online to format the disks.
Status: 8 online, 0 offline.
Endpoint: http://10.124.12.141:9000 http://10.124.12.142:9000
AccessKey: shahril
SecretKey: shahril123

Browser Access:
http://10.124.12.141:9000 http://10.124.12.142:9000

Command-line access: https://docs.min.io/docs/minio-client-quickstart-guide
$ mc config host add myminio http://10.124.12.141:9000 shahril shahril123

Object API (Amazon S3 compatible):
Go: https://docs.min.io/docs/golang-client-quickstart-guide
Java: https://docs.min.io/docs/java-client-quickstart-guide
Python: https://docs.min.io/docs/python-client-quickstart-guide
JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
.NET: https://docs.min.io/docs/dotnet-client-quickstart-guide

Now that the configuration on server 1 is complete, repeat the same steps on server 2.
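Note that minio runs in the foreground here and must be started on every node. For an unattended setup you could wrap the command in a service unit. Below is a minimal sketch, not the official upstream unit file, assuming the binary has been moved to /usr/local/bin; since systemd performs no shell brace expansion, it uses MinIO's own three-dot ellipsis syntax for the endpoints.

# /etc/systemd/system/minio.service -- minimal sketch only
[Unit]
Description=MinIO distributed object storage
After=network-online.target
Wants=network-online.target

[Service]
User=root
Environment="MINIO_ACCESS_KEY=shahril"
Environment="MINIO_SECRET_KEY=shahril123"
# {141...142} and {1...4} are expanded by MinIO itself (note the three dots)
ExecStart=/usr/local/bin/minio server http://10.124.12.{141...142}:9000/opt/drive{1...4}
Restart=on-failure

[Install]
WantedBy=multi-user.target

After installing the unit on both servers, run systemctl daemon-reload, then systemctl enable minio and systemctl start minio.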

Once everything is done, we can start testing.

4. The test phase

When all is done, let's look at how the MinIO service can be used. As shown in the configuration output above, we can access its dashboard through a browser. For our example, let's log in to http://10.124.12.141:9000 with the access key shahril and the secret key shahril123 as configured.

The result is shown as below:

Once logged in, we are directed to the bucket dashboard. Now let's create our first bucket.

Click the folder icon with the plus button and name our first bucket mylove. An example is shown below:

Once that's done, you will notice that a new bucket has been created and is displayed on the left side (see screenshot below).


Next we will upload a file from your local machine into the bucket.

You will see that the new file has been successfully uploaded to the bucket, as shown below.

To verify that the distributed concept is working, let's do a simple test by accessing the MinIO dashboard through the other server. The other server's URL is http://10.124.12.142:9000.

As expected, the bucket and the files we uploaded are also present at the other server's URL, as shown above.
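The same check can also be scripted. MinIO exposes an unauthenticated liveness endpoint, so a plain HTTP probe against each node should return 200 when the node is healthy (endpoint path as documented for MinIO; verify it against your release):

[root@server1 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://10.124.12.141:9000/minio/health/live
200
[root@server1 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://10.124.12.142:9000/minio/health/live
200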

Now let's do another test. This time we will use a different workstation and access our MinIO server through the mc client console.

From the client side we will create a file and then upload it to the existing bucket.

As the end result, we expect to see from the dashboard that the new file uploaded from the client side appears there automatically.

First, open the client workstation and download the MinIO client package. An example is shown below:

[root@client ~]# wget https://dl.min.io/client/mc/release/linux-amd64/mc
--2019-09-30 11:47:38--  https://dl.min.io/client/mc/release/linux-amd64/mc
Resolving dl.min.io (dl.min.io)... 178.128.69.202
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16592896 (16M) [application/octet-stream]
Saving to: 'mc'

100%[===================================>] 16,592,896   741KB/s  in 1m 59s

2019-09-30 11:49:37 (137 KB/s) - 'mc' saved [16592896/16592896]

[root@client ~]# chmod +x mc

Then configure the client side so it can access the bucket, using the access key and secret key we created. An example is described below:

[root@client ~]# ./mc config host add myminio http://10.124.12.142:9000 shahril shahril123
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `myminio` successfully.

Once the configuration is complete, you should be able to list the contents of the existing bucket. An example is shown below:

[root@client ~]# ./mc ls myminio
[2019-09-30 11:16:25 +08]      0B mylove/

[root@client ~]# ./mc ls myminio/mylove/
[2019-09-30 11:16:25 +08]  55KiB myself.jpg

Now create a file, or take any existing one, and copy it into the bucket from the client side. An example is shown below:

[root@client ~]# ./mc cp new_file.txt myminio/mylove
new_file.txt:    38 B / 38 B  100.00%  1.02 KiB/s  0s
[root@client ~]#

[root@client ~]# ./mc ls myminio/mylove/
[2019-09-30 11:16:25 +08]  55KiB myself.jpg
[2019-09-30 11:58:16 +08]    38B new_file.txt
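To double-check the upload from the client itself, the mc client can also print an object's contents and metadata directly (cat and stat are standard mc subcommands):

[root@client ~]# ./mc cat myminio/mylove/new_file.txt
[root@client ~]# ./mc stat myminio/mylove/new_file.txt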

Once that's done, as expected, if you refresh the dashboard page using either server's URL, you should see the new file appear there, as described below.

You can obtain the full link to the image by clicking the share icon on its right, as shown below. This is the unique link for each object in the bucket, which you can use from an application via curl or the API.
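The same kind of link can be generated from the client console. mc share download produces a presigned URL that expires after the given duration and can be fetched with any HTTP client (the --expire flag takes values such as 24h):

[root@client ~]# ./mc share download --expire 24h myminio/mylove/myself.jpg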

Well done! We have now successfully set up and configured a self-hosted, on-premises storage service using MinIO. For more details, see the MinIO documentation.