Using Apache? Visit the companion article here!
OVERVIEW
Akeneo PIM is a PHP/Symfony web application that uses MySQL for persistence and Elasticsearch for search. From an abstract perspective, it consists of three major components:
- MySQL, a relational database for storing data
- Elasticsearch, a search engine for indexing
- PHP/Symfony backend served by Apache2
Accordingly, you may find yourself hosting Elasticsearch on a different machine from PHP/Symfony+Apache2: either a host you maintain yourself, or an Elasticsearch service in the cloud. Once you move the search engine portion of the application to an external host, you’ll need to secure it with SSL.
In a typical Akeneo Enterprise installation, all three components of the application are installed on the same machine. Elasticsearch in this setting does not use authentication, nor does it encrypt its HTTP traffic. Why would it? It’s on the same machine. But when Elasticsearch is installed on another machine, you must enable both authentication and encryption.
In this article, Elasticsearch authentication will be configured as basic authentication, that is, a username and password. Elasticsearch encryption, since we are using the Akeneo Enterprise Edition, will be handled by X-Pack, an optional Java package available for Elasticsearch with a licensing fee.
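To make the end goal concrete, here is a minimal sketch of what a request to the secured Elasticsearch node will look like once both pieces are in place (the hostname, CA file path, and credentials below are placeholders, not values from this article):
$ # Verify the server certificate against our own CA and authenticate with basic auth:
$ curl --cacert /path/to/elasticsearch-ca.pem -u akeneo_pimee:your-password https://es.example.com:9200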
On our new Elasticsearch host (an Ubuntu 20 LTS server), we’ll start by installing Elasticsearch. Next, we’ll configure Elasticsearch so it is accessible to the external network. Then we’ll configure it for SSL, and finally set up basic authentication.
On our Akeneo PIM host (also an Ubuntu 20 LTS server), we’ll patch Akeneo if required, add the CA bundle, configure Akeneo for SSL, and finally rebuild our indexes on the new external Elasticsearch host.
So, follow along as I explain each step of the process of requiring and verifying SSL.
ON THE ELASTICSEARCH HOST
Install Elasticsearch
I’m going to start this process with the assumption that you have a new Ubuntu 20 LTS Server that you are going to install Elasticsearch on. In my case, I’m going to use a Raspberry Pi 4, so the hostnames will reflect this decision.
~$ # Rather than type sudo over and over, I like to become the root user by doing:
~$ sudo -u root -i
Now, the rest of the commands I execute will be as the root user, thus prefixed with #, until I exit.
I’m going to install Elasticsearch by following the Elasticsearch portion of Akeneo’s System installation on Ubuntu 18.04 (Bionic Beaver).
~# # Let's start by installing apt-transport-https: ~# apt-get install apt-transport-https -y ~# # Next, add the elasticsearch gpg-key to apt: ~# wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - OK ~# # Now, add the elasticsearch repository to apt: ~# echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list deb https://artifacts.elastic.co/packages/7.x/apt stable main ~# # With the additional configuration in place, let's update apt: ~# apt update Get:1 https://artifacts.elastic.co/packages/7.x/apt stable InRelease [10.4 kB] Hit:2 http://ports.ubuntu.com/ubuntu-ports focal InRelease Hit:3 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease Get:5 https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages [25.8 kB] Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease Fetched 36.2 kB in 2s (17.9 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done All packages are up to date. ~# # The instructions say to use Elasticsearch 7.5. Let's see if that is available: ~# apt-cache madison elasticsearch elasticsearch | 7.11.1 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.11.0 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.10.2 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.10.1 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.10.0 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.9.3 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.9.2 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.9.1 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.9.0 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.8.1 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.8.0 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.7.1 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages elasticsearch | 7.7.0 | https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 Packages ~# # Hmm. It's not available. ~# # I've used version 7.8.1 with Akeneo 4 successfully before, so I'll I use it here. ~# apt-get install elasticsearch=7.8.1 Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: elasticsearch 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 315 MB of archives. After this operation, 528 MB of additional disk space will be used. Get:1 https://artifacts.elastic.co/packages/7.x/apt stable/main arm64 elasticsearch arm64 7.8.1 [315 MB] Fetched 315 MB in 30s (10.5 MB/s) Selecting previously unselected package elasticsearch. (Reading database ... 66801 files and directories currently installed.) Preparing to unpack .../elasticsearch_7.8.1_arm64.deb ... Unpacking elasticsearch (7.8.1) ... Setting up elasticsearch (7.8.1) ... Created elasticsearch keystore in /etc/elasticsearch/elasticsearch.keystore Processing triggers for systemd (245.4-4ubuntu3.4) ... 
~# # Now that it's installed, let's start elasticsearch: ~# service elasticsearch start ~# # Let's verify vm.max_map_count ~# sysctl -n vm.max_map_count 262144 ~# # GOOD! ~# # Let's make sure it's up and running with its default configuration: ~# curl http://localhost:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
Now that Elasticsearch is installed and running, let’s configure it to start automatically whenever the host starts up or reboots.
Enable Startup on Boot
~# # First, let's create a systemd configuration directory for Elasticsearch: ~# mkdir -p /etc/systemd/system/elasticsearch.service.d ~# # Next, we'll add a configuration file: ~# echo -e "[Service]\nTimeoutStartSec=60" | sudo tee /etc/systemd/system/elasticsearch.service.d/startup-timeout.conf [Service] TimeoutStartSec=60 ~# # Now, let's reload the daemon ~# systemctl daemon-reload ~# # And finally, enable Elasticsearch ~# systemctl enable elasticsearch Synchronizing state of elasticsearch.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable elasticsearch Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service. ~# systemctl daemon-reload
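If you want to confirm that systemd picked up the drop-in and the enablement, a quick check with standard systemd tooling looks like this (nothing here is Elasticsearch-specific):
~# # Show the unit file together with any drop-ins; the TimeoutStartSec override should be listed:
~# systemctl cat elasticsearch
~# # Confirm the service is enabled to start at boot:
~# systemctl is-enabled elasticsearch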
At this point, any time you startup or reboot the host, Elasticsearch will automatically start too. By default, Elasticsearch is only configured to be accessible on localhost (127.0.0.1). So, let’s configure it to be accessible to any external network.
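Before we change anything, you can see the localhost-only default for yourself. This is just a small sketch using standard tooling; until we reconfigure it, Elasticsearch should be listening only on the loopback addresses:
~# # List listening TCP sockets on port 9200 (expect only 127.0.0.1 and/or [::1] for now):
~# ss -ltnp | grep 9200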
Enable External Network Access
~# # Let's find where the Elasticsearch configuration file is located on this machine:
~# find /etc -name elasticsearch.yml
/etc/elasticsearch/elasticsearch.yml
~# # Let's modify /etc/elasticsearch/elasticsearch.yml, adding: network.bind_host: 0
~# # and: discovery.type: single-node, so it can be connected to from outside localhost.
~# vim /etc/elasticsearch/elasticsearch.yml
# Allow external access by any host
network.bind_host: 0
# Set the discovery type as a single node
discovery.type: single-node
~# tail /etc/elasticsearch/elasticsearch.yml # Allow external access by any host network.bind_host: 0 # Set the discovery type as a single node discovery.type: single-node ~# # Let's restart Elasticsearch to pick up the configuration changes: ~# service elasticsearch restart ~# # Verify it works locally: ~# curl http://localhost:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
And, from another machine:
$ curl http://rpi4-4g-elasticsearch:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
Yes. Now it is accessible. Next, we need to generate a certificate-authority (CA) certificate and configure Elasticsearch to enable SSL using it.
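If you would rather condense this step than walk through all of the interactive prompts shown in the next section, elasticsearch-certutil's ca and cert modes can produce an equivalent keystore. The sketch below reuses the hostname and IP from this article and is only a starting point; check your version's certutil options, and note that the interactive http mode I use below also hands you a ready-made CA PEM for clients:
~# # Generate a certificate authority keystore (you will be prompted for an optional password):
~# /usr/share/elasticsearch/bin/elasticsearch-certutil ca --out elastic-stack-ca.p12
~# # Generate a certificate and key for this node, signed by that CA, usable as the HTTP keystore:
~# /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --name rpi4-4g-elasticsearch.donaldbales.com --dns rpi4-4g-elasticsearch.donaldbales.com --ip 192.168.0.4 --out http.p12
~# # Copy the resulting keystore to where elasticsearch.yml will expect it:
~# cp http.p12 /etc/elasticsearch/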
Enable SSL
~# # Let's start by creating a self-signed certificate: ~# # Is certutil in the path? ~# which elasticsearch-certutil ~# # not found, so let's find it: ~# find / -name elasticsearch-certutil /usr/share/elasticsearch/bin/elasticsearch-certutil ~# # Change to the Elasticsearch certificates directory ~# cd /usr/share/elasticsearch /usr/share/elasticsearch# # Let's generate a self-signed certificate authority (CA) certificate /usr/share/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-certutil ca This tool assists you in the generation of X.509 certificates and certificate signing requests for use with SSL/TLS in the Elastic stack. The 'ca' mode generates a new 'certificate authority' This will create a new X.509 certificate and private key that can be used to sign certificate when running in 'cert' mode. Use the 'ca-dn' option if you wish to configure the 'distinguished name' of the certificate authority By default the 'ca' mode produces a single PKCS#12 output file which holds: * The CA certificate * The CA's private key If you elect to generate PEM format certificates (the -pem option), then the output will be a zip file containing individual files for the CA certificate and private key Please enter the desired output file [elastic-stack-ca.p12]: Enter password for elastic-stack-ca.p12 : I left it blank /usr/share/elasticsearch# # What is this hosts fully qualified domain name (fqdn)? /usr/share/elasticsearch# hostname --fqdn rpi4-4g-elasticsearch.donaldbales.com /usr/share/elasticsearch# # Now let's generate an http cert /usr/share/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-certutil http ## Elasticsearch HTTP Certificate Utility The 'http' command guides you through the process of generating certificates for use on the HTTP (Rest) interface for Elasticsearch. This tool will ask you a number of questions in order to generate the right set of files for your needs. ## Do you wish to generate a Certificate Signing Request (CSR)? A CSR is used when you want your certificate to be created by an existing Certificate Authority (CA) that you do not control (that is, you don't have access to the keys for that CA). If you are in a corporate environment with a central security team, then you may have an existing Corporate CA that can generate your certificate for you. Infrastructure within your organisation may already be configured to trust this CA, so it may be easier for clients to connect to Elasticsearch if you use a CSR and send that request to the team that controls your CA. If you choose not to generate a CSR, this tool will generate a new certificate for you. That certificate will be signed by a CA under your control. This is a quick and easy way to secure your cluster with TLS, but you will need to configure all your clients to trust that custom CA. Generate a CSR? [y/N]N ## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate? If you have an existing CA certificate and key, then you can use that CA to sign your new http certificate. This allows you to use the same CA across multiple Elasticsearch clusters which can make it easier to configure clients, and may be easier for you to manage. If you do not have an existing CA, one will be generated for you. Use an existing CA? [y/N]Y ## What is the path to your CA? Please enter the full pathname to the Certificate Authority that you wish to use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS (.jks) or PEM (.crt, .key, .pem) format. 
CA Path: /usr/share/elasticsearch/elastic-stack-ca.p12 Reading a PKCS12 keystore requires a password. It is possible for the keystore's password to be blank, in which case you can simply press <ENTER> at the prompt Password for elastic-stack-ca.p12: ## How long should your certificates be valid? Every certificate has an expiry date. When the expiry date is reached clients will stop trusting your certificate and TLS connections will fail. Best practice suggests that you should either: (a) set this to a short duration (90 - 120 days) and have automatic processes to generate a new certificate before the old one expires, or (b) set it to a longer duration (3 - 5 years) and then perform a manual update a few months before it expires. You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D) For how long should your certificate be valid? [5y] 10y ## Do you wish to generate one certificate per node? If you have multiple nodes in your cluster, then you may choose to generate a separate certificate for each of these nodes. Each certificate will have its own private key, and will be issued for a specific hostname or IP address. Alternatively, you may wish to generate a single certificate that is valid across all the hostnames or addresses in your cluster. If all of your nodes will be accessed through a single domain (e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it simpler to generate one certificate with a wildcard hostname (*.es.example.com) and use that across all of your nodes. However, if you do not have a common domain name, and you expect to add additional nodes to your cluster in the future, then you should generate a certificate per node so that you can more easily generate new certificates when you provision new nodes. Generate a certificate per node? [y/N]Y ## What is the name of node #1? This name will be used as part of the certificate file name, and as a descriptive name within the certificate. You can use any descriptive name that you like, but we recommend using the name of the Elasticsearch node. node #1 name: rpi4-4g-elasticsearch.donaldbales.com ## Which hostnames will be used to connect to rpi4-4g-elasticsearch.donaldbales.com? These hostnames will be added as "DNS" names in the "Subject Alternative Name" (SAN) field in your certificate. You should list every hostname and variant that people will use to connect to your cluster over http. Do not list IP addresses here, you will be asked to enter them later. If you wish to use a wildcard certificate (for example *.es.example.com) you can enter that here. Enter all the hostnames that you need, one per line. When you are done, press <ENTER> once more to move on to the next step. rpi4-4g-elasticsearch.donaldbales.com You entered the following hostnames. - rpi4-4g-elasticsearch.donaldbales.com Is this correct [Y/n]Y ## Which IP addresses will be used to connect to rpi4-4g-elasticsearch.donaldbales.com? If your clients will ever connect to your nodes by numeric IP address, then you can list these as valid IP "Subject Alternative Name" (SAN) fields in your certificate. If you do not have fixed IP addresses, or not wish to support direct IP access to your cluster then you can just press <ENTER> to skip this step. Enter all the IP addresses that you need, one per line. When you are done, press <ENTER> once more to move on to the next step. 192.168.0.4 You entered the following IP addresses. 
- 192.168.0.4 Is this correct [Y/n]Y ## Other certificate options The generated certificate will have the following additional configuration values. These values have been selected based on a combination of the information you have provided above and secure defaults. You should not need to change these values unless you have specific requirements. Key Name: rpi4-4g-elasticsearch.donaldbales.com Subject DN: CN=rpi4-4g-elasticsearch.donaldbales.com Key Size: 2048 Do you wish to change any of these options? [y/N]N Generate additional certificates? [Y/n]n ## What password do you want for your private key(s)? Your private key(s) will be stored in a PKCS#12 keystore file named "http.p12". This type of keystore is always password protected, but it is possible to use a blank password. If you wish to use a blank password, simply press <ENTER> at the prompt below. Provide a password for the "http.p12" file: [<ENTER> for none] ## Where should we save the generated files? A number of files will be generated including your private key(s), public certificate(s), and sample configuration options for Elastic Stack products. These files will be included in a single zip archive. What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip] Zip file written to /usr/share/elasticsearch/elasticsearch-ssl-http.zip /usr/share/elasticsearch# # Let's verify: /usr/share/elasticsearch# ls -lap total 580 drwxr-xr-x 7 root root 4096 Feb 23 23:46 ./ drwxr-xr-x 109 root root 4096 Feb 23 22:53 ../ -rw-rw-r-- 1 root root 544318 Jul 21 2020 NOTICE.txt -rw-r--r-- 1 root root 8165 Jul 21 2020 README.asciidoc drwxr-xr-x 2 root root 4096 Feb 23 22:53 bin/ -rw------- 1 root root 2527 Feb 23 23:34 elastic-stack-ca.p12 -rw------- 1 root root 7309 Feb 23 23:46 elasticsearch-ssl-http.zip drwxr-xr-x 9 root root 4096 Feb 23 22:53 jdk/ drwxr-xr-x 3 root root 4096 Feb 23 22:53 lib/ drwxr-xr-x 48 root root 4096 Feb 23 22:53 modules/ drwxr-xr-x 2 root root 4096 Jul 21 2020 plugins/ /usr/share/elasticsearch# unzip elasticsearch-ssl-http.zip Archive: elasticsearch-ssl-http.zip creating: elasticsearch/ inflating: elasticsearch/README.txt inflating: elasticsearch/http.p12 inflating: elasticsearch/sample-elasticsearch.yml creating: kibana/ inflating: kibana/README.txt inflating: kibana/elasticsearch-ca.pem inflating: kibana/sample-kibana.yml /usr/share/elasticsearch# # Let's change to the unzipped directory, and see what we've got: /usr/share/elasticsearch# cd elasticsearch/ /usr/share/elasticsearch/elasticsearch# ls -lap total 20 drwxr-xr-x 2 root root 4096 Feb 23 23:46 ./ drwxr-xr-x 9 root root 4096 Feb 23 23:49 ../ -rw-r--r-- 1 root root 1091 Feb 23 23:46 README.txt -rw-r--r-- 1 root root 3483 Feb 23 23:46 http.p12 -rw-r--r-- 1 root root 652 Feb 23 23:46 sample-elasticsearch.yml /usr/share/elasticsearch/elasticsearch# # Let's copy the certificate store to the config directory: /usr/share/elasticsearch/elasticsearch# cp http.p12 /etc/elasticsearch/ /usr/share/elasticsearch/elasticsearch# # Next, let's add the sample configuration to our elasticsearch.yml file: /usr/share/elasticsearch# cat sample-elasticsearch.yml # # SAMPLE ELASTICSEARCH CONFIGURATION FOR ENABLING SSL ON THE HTTP INTERFACE # # This is a sample configuration snippet for Elasticsearch that enables and configures SSL for the HTTP (Rest) interface # # This was automatically generated at: 2021-02-23 23:46:30Z # This configuration was intended for Elasticsearch version 7.8.1 # # You should review these settings, and then update the 
main configuration file at # /etc/elasticsearch/elasticsearch.yml # # This turns on SSL for the HTTP (Rest) interface xpack.security.http.ssl.enabled: true # This configures the keystore to use for SSL on HTTP xpack.security.http.ssl.keystore.path: "http.p12" /usr/share/elasticsearch/elasticsearch# vim /etc/elasticsearch/elasticsearch.yml
# This turns on SSL for the HTTP (Rest) interface
xpack.security.http.ssl.enabled: true
# This configures the keystore to use for SSL on HTTP
xpack.security.http.ssl.keystore.path: "http.p12"
# This tells xpack to ignore the differences in the SSL CA Certificate, so
# we can run elasticsearch-setup-passwords interactive after setting up ssl
xpack.security.http.ssl.verification_mode: certificate
/usr/share/elasticsearch/elasticsearch# tail -n 24 /etc/elasticsearch/elasticsearch.yml # Allow external access by any host network.bind_host: 0 # Set the discovery type as a single node discovery.type: single-node # This turns on SSL for the HTTP (Rest) interface xpack.security.http.ssl.enabled: true # This configures the keystore to use for SSL on HTTP xpack.security.http.ssl.keystore.path: "http.p12" # This tells xpack to ignore the differences in the SSL CA Certificate, so # we can run elasticsearch-setup-passwords interactive after setting up ssl xpack.security.http.ssl.verification_mode: certificate /usr/share/elasticsearch/elasticsearch# # And now we can restart elasticsearch /usr/share/elasticsearch/elasticsearch# service elasticsearch restart /usr/share/elasticsearch/elasticsearch# # Let's change back to our Elasticsearch configuration directory: /usr/share/elasticsearch/elasticsearch# cd /etc/elasticsearch/ /etc/elasticsearch# # And verify that Elasticsearch is up and running with SSL (https): /etc/elasticsearch# curl -k https://localhost:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" } /etc/elasticsearch# # Yeah! SSL is working! /etc/elasticsearch# # Next, let's extract the certificates as a pems so we can use it on the pim host /etc/elasticsearch# openssl pkcs12 -in http.p12 -out rpi4-4g-elasticsearch.donaldbales.com.crt.pem -clcerts -nokeys Enter Import Password: /etc/elasticsearch# # and the key... /etc/elasticsearch# openssl pkcs12 -in http.p12 -out rpi4-4g-elasticsearch.donaldbales.com.key.pem -nocerts -nodes Enter Import Password: /etc/elasticsearch# # and the ca cert... 
/etc/elasticsearch# openssl pkcs12 -in http.p12 -out rpi4-4g-elasticsearch.donaldbales.com.ca.pem -nokeys Enter Import Password: /etc/elasticsearch# # Verify it looks good: /etc/elasticsearch# cat rpi4-4g-elasticsearch.donaldbales.com.ca.pem Bag Attributes friendlyName: http localKeyID: 54 69 6D 65 20 31 36 31 34 31 32 33 39 39 30 36 31 37 subject=CN = rpi4-4g-elasticsearch.donaldbales.com issuer=CN = Elastic Certificate Tool Autogenerated CA -----BEGIN CERTIFICATE----- MIIDWDCCAkCgAwIBAgIVAPA0Qova66R3Jru1u6j1cgeODQ5NMA0GCSqGSIb3DQEB CwUAMDQxMjAwBgNVBAMTKUVsYXN0aWMgQ2VydGlmaWNhdGUgVG9vbCBBdXRvZ2Vu ZXJhdGVkIENBMB4XDTIxMDIyMzIzNDYzMFoXDTMxMDIyMzIzNDYzMFowIDEeMBwG A1UEAxMVcnBpNC00Zy1lbGFzdGljc2VhcmNoMIIBIjANBgkqhkiG9w0BAQEFAAOC AQ8AMIIBCgKCAQEAoPur93dPYd83nHnW2RNkm3gQV0ufBMV4zCvnB9xnPa7tF0qc yyVwZEnwPYcO1wkJUi3jhE8Uk2eT3t5BCntXE8mtPiCFFlsjQG1EI0Bh1Pb1+4Kl 5gS6ePXgq6m90dixEM/Q14z/AzlEzs3daIczcmiejKcK7VYGmua2o7jhl+9Hmi1X V37oKY1c6xLBIIVC5NbmLtejS5s0aOUx41l0MzkQeB1wkmMOg55AS7bUkqS5wHNu lmT4joeLDt8nucxw1jRiqbVsBRhIrXhCjq+RQH0a6mP/VOuhZ869sF4wEOPI5OVm cztSaVmTjJ9DVy9L/YXWhI0EoVTb+xH+gxM2bQIDAQABo3UwczAdBgNVHQ4EFgQU NXyFRgt9BdOWJVp9vGUTLHmrGL8wHwYDVR0jBBgwFoAUnbcj91e/WgN1FqDA02oA bVgZ6L0wJgYDVR0RBB8wHYcEwKgAA4IVcnBpNC00Zy1lbGFzdGljc2VhcmNoMAkG A1UdEwQCMAAwDQYJKoZIhvcNAQELBQADggEBAAw1vKtgHsCjsqIEWmF6HOdNk1CL VPLGSsS/O/pFXmWzmd16AR6KXSHjFZWi6XSdmy1rygFWUJtacE0oW0m79jbAry1Z AsXz/pMuy/EzQCmeU1ByLfpkrqK4Av4Lv9K6DEQBGgKlcDznNQLxf9sVNMYcAOjh 8RYfoFJs33gHGbUbiu1wQ3O0N5oqVt+obBKEixJyD8vnsD34VG01OlK3/FW0CNGx 3HWdZvnIA3yweUc2xAGvmfjevH3ry9yMP1pVIEdH7vEMpF78uW3fBQL2QaU+ZjSb TOvTjdkSDFXTzE4PRQncbVgTjjBQgjTZ3K1dn6MRDs0bbqhmlexwiziJFK0= -----END CERTIFICATE----- Bag Attributes friendlyName: ca 2.16.840.1.113894.746875.1.1: <Unsupported tag 6> subject=CN = Elastic Certificate Tool Autogenerated CA issuer=CN = Elastic Certificate Tool Autogenerated CA -----BEGIN CERTIFICATE----- MIIDSTCCAjGgAwIBAgIUPxgyJ6lj/NvLGrvKVN1xp+u5ddEwDQYJKoZIhvcNAQEL BQAwNDEyMDAGA1UEAxMpRWxhc3RpYyBDZXJ0aWZpY2F0ZSBUb29sIEF1dG9nZW5l cmF0ZWQgQ0EwHhcNMjEwMjIzMjMzMzUxWhcNMjQwMjIzMjMzMzUxWjA0MTIwMAYD VQQDEylFbGFzdGljIENlcnRpZmljYXRlIFRvb2wgQXV0b2dlbmVyYXRlZCBDQTCC ASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJht3mMyvBAEclaiLn5QvWhz 1g4YFFMTYWPZbYekKN95OBIWiEoDWAVkDEjmg1RLwBwLUapN0qWaez2OsBR7k4dj /+VBTiN3toxG2thwXM1T1FkxYWa7FLO6JdF2O2t7aio8XYmkdlHfZncuD2pN+yyL bQFBt5FWu0M8t9J5Opkd6nsNWM9t3AhF6KKyx8/QLFvORiZWzuuUCBENPJYpVxza uq0MYMI2G4GPHbqN3aQ8XgT5aqFcqQzwFx7khtj7/zUtCBF+YgJnlO6Z0g/7ZxSM KqCvU7purHlE/GorRCI1ArCIfXoe82+7PsDHf9W6EmfbflI9ZvHDgbhtWyYyxI8C AwEAAaNTMFEwHQYDVR0OBBYEFJ23I/dXv1oDdRagwNNqAG1YGei9MB8GA1UdIwQY MBaAFJ23I/dXv1oDdRagwNNqAG1YGei9MA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZI hvcNAQELBQADggEBAFgfObGCBuuBzFb7F3sMmIkF5J3mtYQNUyNyAl4+6orwCM/B YLeamSd4XYxVegc4q9XsWUFqxiYUiMcMzs2ytyAnVV6v9AZjSxhojPX+liEp+iNW yrTRV6FBUFjmWIGeqArseaGm9PeSmS1sLSHFXtIrF94r2bKnwmWNN7/zFjM3d15y YlFyvAZucxnlhxupKEErrAbRxIOcOcwtLhh1qu1cTZLwqyydTxA95b4K1IcF84J9 qSpm43+Z6UNmQOHG6F9Qlmy/F3+z43J8A4EW2UNs+UlaMViQMA0mPchq3mLfSNTL kajxeiOkm0SAPf0I3hiMX8sxcdZfamUS9q03CRg= -----END CERTIFICATE----- /etc/elasticsearch# # The last certificate in the file is the CA cert, which will be copied onto the Akeneo PIM web server. 
/etc/elasticsearch# # So let's copy it to the login user's directory: /etc/elasticsearch# cp rpi4-4g-elasticsearch.donaldbales.com.ca.pem /home/ubuntu/ /etc/elasticsearch# # And change the owner privileges so it can be scp'd: /etc/elasticsearch# chown ubuntu:ubuntu /home/ubuntu/rpi4-4g-elasticsearch.donaldbales.com.ca.pem -rw------- 1 ubuntu ubuntu 2931 Feb 25 22:50 /home/ubuntu/rpi4-4g-elasticsearch.donaldbales.com.ca.pem
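Before leaving the Elasticsearch host, it is worth sanity-checking what we just extracted. This is only a sketch using stock openssl commands; adjust the file name if your hostname differs:
/etc/elasticsearch# # Print the subject and issuer of every certificate in the PEM bundle:
/etc/elasticsearch# openssl crl2pkcs7 -nocrl -certfile rpi4-4g-elasticsearch.donaldbales.com.ca.pem | openssl pkcs7 -print_certs -noout
/etc/elasticsearch# # Attempt a TLS handshake against the running node using that bundle as the trust store;
/etc/elasticsearch# # it should report "Verify return code: 0 (ok)":
/etc/elasticsearch# openssl s_client -connect localhost:9200 -CAfile rpi4-4g-elasticsearch.donaldbales.com.ca.pem </dev/null 2>/dev/null | grep 'Verify return code'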
Four out of five tasks are done: Elasticsearch is installed, it starts at boot, it accepts connections from external hosts, and it uses SSL. Now we need to add basic authentication.
Enable Basic Authentication
/etc/elasticsearch# # Is elasticsearch-users in the path? /etc/elasticsearch# which elasticsearch-users /etc/elasticsearch# # No. So let's find it: /etc/elasticsearch# find / -name elasticsearch-users /usr/share/elasticsearch/bin/elasticsearch-users /etc/elasticsearch# # Let's use the command to add an Akeneo PIM super user /etc/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-users useradd akeneo_pimee -p akeneo_pimee -r superuser /etc/elasticsearch# # Now let's verify: /etc/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-users list akeneo_pimee : superuser /etc/elasticsearch# # Next, let's add the basic authentication to the elasticsearch.yml file:
# This adds the file authentication realm
xpack.security.authc.realms.file.users.order: 0
# This enables xpack
xpack.security.enabled: true
/etc/elasticsearch# vim /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch# tail /etc/elasticsearch/elasticsearch.yml # Allow external access by any host network.bind_host: 0 # Set the discovery type as a single node discovery.type: single-node # This turns on SSL for the HTTP (Rest) interface xpack.security.http.ssl.enabled: true # This configures the keystore to use for SSL on HTTP xpack.security.http.ssl.keystore.path: "http.p12" # This tells xpack to ignore the differences in the SSL CA Certificate, so # we can run elasticsearch-setup-passwords interactive after setting up ssl xpack.security.http.ssl.verification_mode: certificate # This adds the file authentication realm xpack.security.authc.realms.file.users.order: 0 # This enables xpack xpack.security.enabled: true /etc/elasticsearch# # Now let's restart elasticsearch /etc/elasticsearch# service elasticsearch restart /etc/elasticsearch# # Let's setup passwords: /etc/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive --url https://localhost:9200 Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user. You will be prompted to enter passwords as the process progresses. Please confirm that you would like to continue [y/N]y Enter password for [elastic]: passwords must be at least [6] characters long Try again. Enter password for [elastic]: Reenter password for [elastic]: Enter password for [apm_system]: Reenter password for [apm_system]: Enter password for [kibana_system]: Reenter password for [kibana_system]: Enter password for [logstash_system]: Reenter password for [logstash_system]: Enter password for [beats_system]: Reenter password for [beats_system]: Enter password for [remote_monitoring_user]: Reenter password for [remote_monitoring_user]: Changed password for user [apm_system] Changed password for user [kibana_system] Changed password for user [kibana] Changed password for user [logstash_system] Changed password for user [beats_system] Changed password for user [remote_monitoring_user] Changed password for user [elastic] /etc/elasticsearch# # I specified akeneo_pimee for all passwords, you should do something else! /etc/elasticsearch# # And restart again... 
/etc/elasticsearch# service elasticsearch restart /etc/elasticsearch# # Now let's test that it requires a username and password: /etc/elasticsearch# # First, without a username and password: /etc/elasticsearch# curl -k https://localhost:9200 {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Bearer realm=\"security\"","ApiKey","Basic realm=\"security\" charset=\"UTF-8\""]}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Bearer realm=\"security\"","ApiKey","Basic realm=\"security\" charset=\"UTF-8\""]}},"status":401}/etc/elasticsearch# /etc/elasticsearch# # And then, with a username and password: /etc/elasticsearch# curl -k -u akeneo_pimee:akeneo_pimee https://localhost:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" } /etc/elasticsearch# # Nice!
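One housekeeping note before we test remotely: since I used the same throwaway password everywhere, here is a sketch of how you could rotate the akeneo_pimee file-realm password later with the same elasticsearch-users tool (remember to update Akeneo's .env to match afterwards):
/etc/elasticsearch# # Change the password for the akeneo_pimee user; the tool will prompt for the new value:
/etc/elasticsearch# /usr/share/elasticsearch/bin/elasticsearch-users passwd akeneo_pimee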
Let’s test it from another host:
$ curl -k -u akeneo_pimee:akeneo_pimee https://rpi4-4g-elasticsearch.donaldbales.com:9200 { "name" : "rpi4-4g-elasticsearch", "cluster_name" : "elasticsearch", "cluster_uuid" : "ubshELm_TfShUFLFWO9Kpg", "version" : { "number" : "7.8.1", "build_flavor" : "default", "build_type" : "deb", "build_hash" : "b5ca9c58fb664ca8bf9e4057fc229b3396bf3a89", "build_date" : "2020-07-21T16:40:44.668009Z", "build_snapshot" : false, "lucene_version" : "8.5.1", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" } $ # Groovy, and I don't mean the programming language.
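The -k flag in these tests tells curl to skip certificate verification. From any machine that has a copy of the CA PEM we extracted earlier, you can verify the certificate properly instead; a small sketch reusing the hostname and credentials from this article:
$ # Full certificate verification plus basic authentication, against the cluster health endpoint:
$ curl --cacert rpi4-4g-elasticsearch.donaldbales.com.ca.pem -u akeneo_pimee:akeneo_pimee 'https://rpi4-4g-elasticsearch.donaldbales.com:9200/_cluster/health?pretty'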
At this point, we’ve installed Elasticsearch on its own host, changed its configuration to be accessible to outside networks, set up SSL, and then set up basic authentication. Now it is secured. It’s time for us to move on to the Akeneo PIM work.
ON THE AKENEO PIM WEB HOST
Hacking the Elasticsearch Client
I submitted a contribution, https://github.com/akeneo/pim-community-dev/pull/12614, "Added support for an SSL connection to the Elasticsearch Client," with a modified version of the Client::__construct() method that supports our new SSL option:
From: vendor/akeneo/pim-community-dev/src/Akeneo/Tool/Bundle/ElasticsearchBundle/Client.php
/**
 * Configure the PHP Elasticsearch client.
 * To learn more, please see {@link https://www.elastic.co/guide/en/elasticsearch/client/php-api/current/_configuration.html}
 *
 * @param ClientBuilder $builder
 * @param Loader $configurationLoader
 * @param array $hosts
 * @param string $indexName
 */
public function __construct(
    ClientBuilder $builder,
    Loader $configurationLoader,
    array $hosts,
    $indexName,
    string $idPrefix = ''
) {
    $this->builder = $builder;
    $this->configurationLoader = $configurationLoader;
    $this->hosts = $hosts;
    $this->indexName = $indexName;
    $this->idPrefix = $idPrefix;

    $builder->setHosts($hosts);

    // Added by Don Bales to support SSL connection with Elasticsearch.
    // getenv() returns false when the variable is not defined, so only
    // pass the CA path to the builder when it is actually set.
    $sslCa = getenv('APP_INDEX_SSL_CA');
    if ($sslCa !== false && $sslCa !== '') {
        $builder->setSSLVerification($sslCa);
    }
    // end of add.

    $this->client = $builder->build();
}
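For context on why this small patch is enough: the Elasticsearch PHP client's ClientBuilder::setSSLVerification() accepts either a boolean or a path to a CA bundle, so handing it the path from APP_INDEX_SSL_CA makes the client verify the server certificate against our self-signed CA instead of the system trust store.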
If the pull request has not been incorporated into the Akeneo PIM, you can edit the file directly.
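A quick way to check whether your installed version already contains the change is to look for the environment variable in the vendored file (a sketch; the path is the same one shown above):
~/pim-enterprise-standard$ grep -n 'APP_INDEX_SSL_CA' vendor/akeneo/pim-community-dev/src/Akeneo/Tool/Bundle/ElasticsearchBundle/Client.php
~/pim-enterprise-standard$ # No output means the patch is not there yet and you need the manual edit below.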
~/pim-enterprise-standard$ vim vendor/akeneo/pim-community-dev/src/Akeneo/Tool/Bundle/ElasticsearchBundle/Client.php ~/pim-enterprise-standard$ # This patch will require us to install the CA Bundle ~/pim-enterprise-standard$ php -d memory_limit=4G /usr/local/bin/composer require composer/ca-bundle Warning from https://repo.packagist.org: You are using an outdated version of Composer. Composer 2 is now available and you should upgrade. See https://getcomposer.org/2 Warning from https://repo.packagist.org: You are using an outdated version of Composer. Composer 2 is now available and you should upgrade. See https://getcomposer.org/2 Using version ^1.2 for composer/ca-bundle ./composer.json has been updated Loading composer repositories with package information Warning from https://repo.packagist.org: You are using an outdated version of Composer. Composer 2 is now available and you should upgrade. See https://getcomposer.org/2 Updating dependencies (including require-dev) Package operations: 1 install, 0 updates, 0 removals As there is no 'unzip' command installed zip files are being unpacked using the PHP zip extension. This may cause invalid reports of corrupted archives. Besides, any UNIX permissions (e.g. executable) defined in the archives will be lost. Installing 'unzip' may remediate them. - Installing composer/ca-bundle (1.2.9): Downloading (100%) Package doctrine/doctrine-cache-bundle is abandoned, you should avoid using it. No replacement was suggested. Package doctrine/reflection is abandoned, you should avoid using it. Use roave/better-reflection instead. Package guzzlehttp/ringphp is abandoned, you should avoid using it. No replacement was suggested. Package guzzlehttp/streams is abandoned, you should avoid using it. No replacement was suggested. Package twig/extensions is abandoned, you should avoid using it. No replacement was suggested. Package zendframework/zend-code is abandoned, you should avoid using it. Use laminas/laminas-code instead. Package zendframework/zend-eventmanager is abandoned, you should avoid using it. Use laminas/laminas-eventmanager instead. Writing lock file Generating autoload files ocramius/package-versions: Generating version class... ocramius/package-versions: ...done generating version class 63 packages you are using are looking for funding. Use the `composer fund` command to find out more! Symfony recipes are disabled: "symfony/flex" not found in the root composer.json > bash vendor/akeneo/pim-enterprise-dev/std-build/install-required-files.sh src/ directory already exists. Not preparing the directory content. Nothing to unpack ~/pim-enterprise-standard$ # Next, let's add the new Elasticsearch host to our /etc/hosts file: ~/pim-enterprise-standard$ sudo vim /etc/hosts
# The following is added for access to rpi4-4g-elasticsearch
192.168.0.4 rpi4-4g-elasticsearch rpi4-4g-elasticsearch.donaldbales.com
~/pim-enterprise-standard$ sudo cat /etc/hosts 127.0.0.1 localhost 127.0.0.1 akeneo-pim.local # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts # The following is added for access to rpi4-4g-elasticsearch 192.168.0.4 rpi4-4g-elasticsearch rpi4-4g-elasticsearch.donaldbales.com ~/pim-enterprise-standard$ # and, restart systemd-hostnamed to pick up the change in /etc/hosts ~/pim-enterprise-standard$ sudo systemctl restart systemd-hostnamed ~/pim-enterprise-standard$ # Now let's copy the CA Cert we created on the Elasticsearch host: ~/pim-enterprise-standard$ scp ubuntu@rpi4-4g-elasticsearch:/home/ubuntu/rpi4-4g-elasticsearch.donaldbales.com.ca.pem . ubuntu@rpi4-4g-elasticsearch's password: rpi4-4g-elasticsearch.ca.pem 100% 2821 479.6KB/s 00:00 ~/pim-enterprise-standard$ # Next, let's add the CA Cert to the .env configuration file: ~/pim-enterprise-standard$ vim .env
APP_INDEX_HOSTS=https://akeneo_pimee:[email protected]:9200
APP_INDEX_SSL_CA=/home/ubuntu/pim-enterprise-standard/rpi4-4g-elasticsearch.donaldbales.com.ca.pem
~/pim-enterprise-standard$ cat .env APP_ENV=prod APP_DEBUG=0 APP_DATABASE_HOST=localhost APP_DATABASE_PORT=3306 APP_DATABASE_NAME=akeneo_pim APP_DATABASE_USER=akeneo_pim APP_DATABASE_PASSWORD=akeneo_pim APP_DEFAULT_LOCALE=en APP_SECRET=ThisTokenIsNotSoSecretChangeIt #APP_INDEX_HOSTS=localhost:9200 # We change the host to point to our new elasticsearch host and add the cert APP_INDEX_HOSTS=https://akeneo_pimee:[email protected]:9200 APP_INDEX_SSL_CA=/home/ubuntu/pim-enterprise-standard/rpi4-4g-elasticsearch.donaldbales.com.ca.pem # end of change APP_PRODUCT_PROPOSAL_INDEX_NAME=akeneo_pim_product_proposal APP_PUBLISHED_PRODUCT_INDEX_NAME=akeneo_pim_published_product APP_PRODUCT_AND_PRODUCT_MODEL_INDEX_NAME=akeneo_pim_product_and_product_model APP_RECORD_INDEX_NAME=akeneo_referenceentity_record APP_ASSET_INDEX_NAME=akeneo_assetmanager_asset MAILER_URL=localhost:25 AKENEO_PIM_URL=http://localhost:8080 CONTAINER_NAME=container K8S_CLUSTER_NAME=cluster GOOGLE_LOCATION=location GOOGLE_NAMESPACE=namespace GOOGLE_POD_NAME=pod GOOGLE_CLOUD_PROJECT=project SRNT_GOOGLE_APPLICATION_CREDENTIALS='/srv/pim/config/fake_credentials_gcp.json' SRNT_GOOGLE_BUCKET_NAME=bucket HOSTNAME=localhost PHP_VERSION=7.2 MEMCACHED_SVC=memcached MONITORING_AUTHENTICATION_TOKEN=secret_key BLACKFIRE_SERVER_ID=fake BLACKFIRE_SERVER_TOKEN=fake BLACKFIRE_CLIENT_ID=fake BLACKFIRE_CLIENT_TOKEN=fake ASPELL_BINARY_PATH=aspell ~/pim-enterprise-standard$ # We need to shut down our local elasticsearch install if it is running: ~/pim-enterprise-standard$ sudo service elasticsearch stop ~/pim-enterprise-standard$ # Let's test by checking Akeneo's requirements: ~/pim-enterprise-standard$ bin/console pim:installer:check-requirements Akeneo PIM requirements check: +---------+------------------------+ | Check | Mandatory requirements | +---------+------------------------+ +---------+--------------------------------------------+ | Check | PHP requirements | +---------+--------------------------------------------+ | OK | detect_unicode must be disabled in php.ini | | OK | string functions should not be overloaded | +---------+--------------------------------------------+ +---------+--------------------------------------------------------------------------------------+ | Check | Pim requirements | +---------+--------------------------------------------------------------------------------------+ | OK | PHP version must be at least 5.5.9 (7.3.20-1+ubuntu20.04.1+deb.sury.org+1 installed) | | OK | Vendor libraries must be installed | | OK | var/cache/ directory must be writable | | OK | Configured default timezone "UTC" must be supported by your installation of PHP | | OK | iconv() must be available | | OK | json_encode() must be available | | OK | session_start() must be available | | OK | ctype_alpha() must be available | | OK | token_get_all() must be available | | OK | simplexml_import_dom() must be available | | OK | APC version must be at least 3.1.13 when using PHP 5.4 | | OK | detect_unicode must be disabled in php.ini | | OK | PCRE extension must be available | | OK | string functions should not be overloaded | | OK | PHP version must be at least 7.3.0 (7.3.20-1+ubuntu20.04.1+deb.sury.org+1 installed) | | OK | apcu extension should be available | | OK | bcmath extension should be available | | OK | curl extension should be available | | OK | fileinfo extension should be available | | OK | gd extension should be available | | OK | intl extension should be available | | OK | pdo_mysql extension should be available | | OK | xml extension 
should be available | | OK | zip extension should be available | | OK | exif extension should be available | | OK | imagick extension should be available | | OK | mbstring extension should be available | | OK | openssl extension should be available | | OK | GD extension must be at least 2.0 | | OK | Ghostscript executable must be at least 9.27 | | OK | Aspell executable must be available | | OK | icu library must be at least 4.2 | | OK | memory_limit should be at least 512M | | OK | MySQL version must be greater or equal to 8.0.18 and lower than 8.1.0 | | OK | Check support for correct innodb_page_size for utf8mb4 support | | OK | The exec() function should be enabled in order to run jobs | +---------+--------------------------------------------------------------------------------------+ +---------+--------------------------------------------------------------------------------------------------------------------------+ | Check | Recommendations | +---------+--------------------------------------------------------------------------------------------------------------------------+ | OK | PCRE extension should be at least version 8.0 (10.34 installed) | | OK | PHP-DOM and PHP-XML modules should be installed | | OK | mb_strlen() should be available | | OK | utf8_decode() should be available | | OK | filter_var() should be available | | OK | posix_isatty() should be available | | OK | intl extension should be available | | OK | intl extension should be correctly configured | | OK | intl ICU version should be at least 4+ | | OK | intl ICU version installed on your system is outdated (66.1) and does not match the ICU data bundled with Symfony (66.1) | | OK | intl ICU version installed on your system (66.1) does not match the ICU data bundled with Symfony (66.1) | | OK | intl.error_level should be 0 in php.ini | | OK | a PHP accelerator should be installed | | OK | short_open_tag should be disabled in php.ini | | OK | magic_quotes_gpc should be disabled in php.ini | | OK | register_globals should be disabled in php.ini | | OK | session.auto_start should be disabled in php.ini | | OK | xdebug.max_nesting_level should be above 100 in php.ini | | OK | "memory_limit" should be greater than "post_max_size". | | OK | "post_max_size" should be greater than "upload_max_filesize". | | OK | PDO should be installed | | OK | PDO should have some drivers installed (currently available: mysql) | | OK | cURL extension must be at least 7.0 | | OK | APCu should be enabled in CLI to get better performances | +---------+--------------------------------------------------------------------------------------------------------------------------+ ~/pim-enterprise-standard$ # And finally, build the indexes on our external elasticsearch host ~/pim-enterprise-standard$ bin/console akeneo:elasticsearch:reset-indexes This action will entirely reset the following indexes in the PIM: akeneo_pim_product_and_product_model akeneo_pim_product_proposal akeneo_pim_published_product akeneo_referenceentity_record akeneo_assetmanager_asset Are you sure you want to proceed ? (Y/n)Y Resetting the index: akeneo_pim_product_and_product_model Resetting the index: akeneo_pim_product_proposal Resetting the index: akeneo_pim_published_product Resetting the index: akeneo_referenceentity_record Resetting the index: akeneo_assetmanager_asset All the indexes have been successfully reset! You can now use the command pim:product:index and pim:product-model:index to start re-indexing your product and product models. 
~/pim-enterprise-standard$ bin/console akeneo:asset-manager:index-assets --all The assets of 6 asset families have been indexed. ~/pim-enterprise-standard$ bin/console akeneo:reference-entity:index-records --all The records of 7 reference entities have been indexed. ~/pim-enterprise-standard$ bin/console pim:product-model:index --all 50/50 [============================] 100% 50 product models indexed ~/pim-enterprise-standard$ bin/console pim:product:index --all 1239/1239 [============================] 100% 1239 products indexed ~/pim-enterprise-standard$ bin/console pimee:product-proposal:index 0 product proposals to index 0 [>---------------------------] 0 product model proposals to index 0 [>---------------------------] ~/pim-enterprise-standard$ bin/console pimee:published-product:index 0 published products to index 0 published products indexed ~/pim-enterprise-standard$ # Now we can restart the php fpm services, and test our Akeneo PIM through a browser. ~/pim-enterprise-standard$ sudo service php7.3-fpm restart
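As a last sanity check from the PIM host, you can list the freshly rebuilt indexes over SSL using the CA file and credentials configured above; this is just a sketch reusing the values from this article:
~/pim-enterprise-standard$ curl --cacert rpi4-4g-elasticsearch.donaldbales.com.ca.pem -u akeneo_pimee:akeneo_pimee 'https://rpi4-4g-elasticsearch.donaldbales.com:9200/_cat/indices?v'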
Now you know how to configure your Akeneo PIM to talk to Elasticsearch over SSL using X-Pack.
Good skill!