Google Drive on Linux with rclone

Rclone logo

You are probably waiting for an official Google Drive client for Linux, as I am, but unfortunately at the time this article is written, we don’t have one. There are a few other pieces of software that can do the job, and GNOME supports it too, but for my personal taste rclone works fine!

I use Debian 11 at this point, but it should most probably work on any distro. Let’s install it!

Install

sudo apt-get install rclone

Set up credentials with the Google API; rclone works without them, but during my tests it was much slower.

Create an OAuth 2.0 Client ID in the Google Cloud Console

Step 1

Google API add credentials

Step 2

Google API add credentials add

Step 3

Google API add credentials add name

Step 4: the Client ID and Secret will be used while configuring rclone

Rclone configuration: follow the wizard.

rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> My_GoogleDrive
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
...
11 / FTP Connection
   \ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
...
Storage> 13
** See help for drive backend at: https://rclone.org/drive/ **
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a string value. Press Enter for the default ("").
client_id> Type your Client ID here!
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret> Type your Client Secret here!
Scope that rclone should use when requesting access from drive.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
 2 / Read-only access to file metadata and file contents.
   \ "drive.readonly"
   / Access to files created by rclone only.
 3 | These are visible in the drive website.
   | File authorization is revoked when the user deauthorizes the app.
   \ "drive.file"
   / Allows read and write access to the Application Data folder.
 4 | This is not visible in the drive website.
   \ "drive.appfolder"
   / Allows read-only access to file metadata but
 5 | does not allow any access to read or download file content.
   \ "drive.metadata.readonly"
scope> 1
ID of the root folder
Leave blank normally.

In the next section, I usually use the default options (press Enter), but you can customize them for your needs.

Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.

Enter a string value. Press Enter for the default ("").
root_folder_id> 
Service Account Credentials JSON file path 
Leave blank normally.
Needed only if you want use SA instead of interactive login.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

Enter a string value. Press Enter for the default ("").
service_account_file> 
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> 
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> 
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=2YFW_P5Kf1TC4YD3I1jMCg
Log in and authorize rclone for access
Waiting for code...

At this point your browser should open and ask you to grant access to the rclone app.

Allow rclone to access your Google Account

Once you accept, the configuration is almost done.

Got code
Configure this as a team drive?
y) Yes
n) No (default)
y/n>

Rclone successfully installed
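Before wiring it into systemd, it’s worth sanity-checking the new remote by hand. A quick sketch, assuming the remote name My_GoogleDrive from the wizard above:

```shell
# List the top-level folders of the remote to confirm auth works.
rclone lsd My_GoogleDrive:

# Create the mount point and try a manual mount.
mkdir -p "$HOME/GoogleDrive"
rclone mount My_GoogleDrive: "$HOME/GoogleDrive" --vfs-cache-mode full --daemon

# Your Drive content should be visible now; unmount when done testing.
ls "$HOME/GoogleDrive"
fusermount -u "$HOME/GoogleDrive"
```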

We are almost done! Create the mount point, then let’s set up the systemd service and start it.

mkdir -p ~/GoogleDrive

sudo vim /lib/systemd/system/rclone.service
[Unit]
Description=Rclone
Requires=network-online.target
After=network-online.target
 
[Service]
User=your_user_here
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rclone mount My_GoogleDrive: ${HOME}/GoogleDrive --vfs-cache-mode full --daemon --config ${HOME}/.config/rclone/rclone.conf
ExecStop=/usr/bin/fusermount -u ${HOME}/GoogleDrive -z
ExecStartPre=/bin/sh -c 'until ping -c1 google.com > /dev/null; do sleep 1; done;'

[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable rclone
sudo systemctl start rclone
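To verify the service came up and the mount is live (the unit name and mount point below match the ones used above):

```shell
# Check the service state and confirm the FUSE mount exists.
systemctl status rclone --no-pager
findmnt -t fuse.rclone
ls ~/GoogleDrive
```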

Using multiple kubeconfig files and how to merge them into one

Reference: Kubernetes Documentation

When you are managing multiple Kubernetes clusters, you have to deal with multiple kubeconfig files.
There are several ways to handle this; my favorite is to keep the files separate. You can achieve this by:

Creating a folder and moving the kubeconfig files there.

mkdir ~/.kube/clusters
mv /path/cluster1.config /path/cluster2.config ~/.kube/clusters

Adding the $KUBECONFIG environment variable to your ~/.bashrc; the value has to be a colon-separated list such as “/path/cluster1.config:/path/cluster2.config”.

export KUBECONFIG=$(find ~/.kube/clusters -type f | sed ':a;N;s/\n/:/;ba')
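If you are curious what that pipeline produces, here is a self-contained sketch using a throwaway directory and made-up file names:

```shell
# Build a sample directory with two fake kubeconfig files.
demo=$(mktemp -d)
touch "$demo/cluster1.config" "$demo/cluster2.config"

# Same pipeline as above: join the file list with colons.
KUBECONFIG=$(find "$demo" -type f | sed ':a;N;s/\n/:/;ba')
echo "$KUBECONFIG"

rm -r "$demo"
```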

Start a new bash terminal for the change to take effect, or run “source ~/.bashrc” in the current one.

Check the result: you can see both clusters, and of course you can switch to the one you wish to use.

kubectl config get-clusters
NAME
cluster1
cluster2

Another way would be to merge the multiple kubeconfig files into one and store it in ~/.kube/config, the default location used when the $KUBECONFIG environment variable is not set.

export KUBECONFIG="/path/cluster1.config:/path/cluster2.config"
kubectl config view --flatten > ~/.kube/config

That being said, I prefer the first approach because it makes life easier: adding or removing a cluster is simply a matter of adding or removing files.
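Following that logic, a tiny helper for your ~/.bashrc can rebuild $KUBECONFIG whenever files come and go. The function name and default directory here are just my suggestions, a sketch rather than anything official:

```shell
# Rebuild $KUBECONFIG from the files currently in a directory
# (defaults to ~/.kube/clusters, the folder created earlier).
kube_refresh() {
  KUBECONFIG=$(find "${1:-$HOME/.kube/clusters}" -type f | paste -sd: -)
  export KUBECONFIG
}
```

Run kube_refresh after adding or removing a cluster file; paste -sd: is just a shorter way to colon-join the list than the sed one-liner above.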