Category Archives: General

Setting up a lubuntu desktop

First, get a VPS from a quality company, for example DigitalOcean.

Once you have your VPS, reload the OS to Ubuntu 18.x.

Lubuntu Desktop Installation

Login to your VPS server as root via SSH.

First, update the system packages by issuing the following command:

apt-get update

Then Install the Lubuntu Desktop:

apt-get install lubuntu-desktop

The installation may take about 10 to 15 minutes. Afterwards, you’ll have a full-fledged desktop environment with some handy applications, such as AbiWord for documents and Firefox for web browsing.

Then reboot your server for the changes to take effect:

reboot

Now add your first user.

adduser username
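The new account has no administrative rights yet. If you want it to be able to run admin commands, one common approach on Ubuntu is adding it to the stock sudo group (a sketch; username is a placeholder):

```shell
# Grant the new user sudo rights via Ubuntu's stock sudo group
usermod -aG sudo username

# Verify the membership took effect
groups username
```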

Now let's install the remote desktop server.

First open a terminal and enter sudo apt-get install xrdp.

When that is installed, enter sudo nano /etc/xrdp/startwm.sh in the terminal. Make sure the last line looks like this:

. /etc/X11/Xsession

Then go to your home folder, right-click, and select Show hidden. If there is no file named .xsession, create it. If a file by that name already exists, open it and make sure that it looks like this when you're done: lxsession -e LXDE -s Lubuntu

Now type sudo service xrdp restart in the terminal to restart xrdp. It should now work 🙂

Now let's install Google Chrome for all users.

From the SSH command line where you should still be logged in, run the following commands one by one:

  1. sudo wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
  2. sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
  3. sudo apt-get update
  4. sudo apt-get install google-chrome-stable

The first time you run one of these commands, it will ask for the admin password. Run them one by one; when finished, you will find Chrome under Applications -> Internet -> Google Chrome.

Adding Additional Disk Drives to CentOS

Adding a new drive to CentOS or RedHat systems.

Making use of a second drive for extra space? Here’s a quick run-down:

1) Make sure you know which disk is being formatted. First, second, and third drives will be /dev/sda, /dev/sdb, and /dev/sdc respectively. Check this with fdisk -l

[03:50:04] [root@virt ~]# fdisk -l

Disk /dev/sda: 34.3 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        4177    33447330   8e  Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

2) You can see that /dev/sdb (our second hard drive) does not have any partitions. We will need to create one or more partitions on the drive, make a file system on it, and then mount it. Let's write partitions to the drive using fdisk /dev/sdb:

[03:53:01] [root@virt ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help):

3) As you can see from the help menu (by using the command “m”) we want to add a new partition. Using the defaults will use the entire disk. After it’s created, you will want to use the command “w” to “write table to disk and exit”.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1044, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1044, default 1044): 
Using default value 1044

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[03:54:58] [root@virt ~]#
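If you need to repeat this on several machines, the same interactive answers can be fed to fdisk from a here-document. This is only a sketch, and it is destructive, so triple-check the device name first:

```shell
# DANGER: writes a new partition table to /dev/sdb.
# The answers mirror the interactive session above:
# n = new partition, p = primary, 1 = partition number,
# two blank lines = accept the default first/last cylinder, w = write and exit
fdisk /dev/sdb <<EOF
n
p
1


w
EOF
```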

4) Now you will notice that the output of fdisk -l /dev/sdb shows a partition as /dev/sdb1:

[03:57:08] [root@virt ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1044     8385898+  83  Linux

5) Now we need to create a file system on it. I’ve always used ext3 for general use/purposes. You’ll want to use the command mkfs -t ext3 /dev/sdb1 as shown here:

[03:58:38] [root@virt ~]# mkfs -t ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1048576 inodes, 2096474 blocks
104823 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2147483648
64 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

6) Great, now we have a single-partition secondary drive using the ext3 file system. Next, create a directory to mount it in; let's just use “/drive2”. You'll use the command mount -t [filesystem] [source] [mount directory] to mount it.

[03:59:50] [root@virt ~]# mount -t ext3 /dev/sdb1 /drive2/

7) Now you’ll notice, via df, that the drive is mounted:

[03:59:57] [root@virt ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       28G  1.4G   25G   6% /
/dev/sda1              99M   19M   76M  20% /boot
tmpfs                1014M     0 1014M   0% /dev/shm
/dev/sdb1             7.9G  147M  7.4G   2% /drive2

8) Last step – you want to make sure the drive automatically mounts itself when the server boots/reboots. You’ll need to add the following line to your /etc/fstab file:

/dev/sdb1  /drive2  ext3  defaults 0 0
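You can verify the new fstab entry without rebooting: unmount the drive, then let mount -a remount everything listed in /etc/fstab.

```shell
umount /drive2
mount -a        # remounts everything in /etc/fstab; errors here mean a bad entry

df -h /drive2   # confirm the drive came back
```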

All done!


Source: https://dbiers.me/add-new-drive-to-centos/

SolusVM Mass Starting/Stopping Virtual Servers

If you need to mass start your virtual servers, run the following code in SSH on the host:

Xen PV/HVM

START

CFGS=/home/xen/vm*/;for cfg in $CFGS;do xm create $cfg*.cfg;done

STOP

xm shutdown -aw

OpenVZ

START

CFGS=`vzlist -S -Ho ctid`;for cfg in $CFGS;do vzctl start $cfg;done

STOP

CFGS=`vzlist -S -Ho ctid`;for cfg in $CFGS;do vzctl stop $cfg;done

KVM

CFGS=/home/kvm/kvm*/;for cfg in $CFGS;do virsh create $cfg*.xml;done
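The KVM entry above only covers starting; there is no matching stop loop. Assuming the guests are registered with libvirt on the host, a graceful-stop sketch:

```shell
# Ask every running libvirt domain to shut down cleanly
for dom in $(virsh list --name); do
  virsh shutdown "$dom"
done
```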

Using Screen

Screen is like a window manager for your console. It will allow you to keep multiple terminal sessions running and easily switch between them. It also protects you from disconnection, because the screen session doesn’t end when you get disconnected.

You’ll need to make sure that screen is installed on the server you are connecting to. If that server is Ubuntu or Debian, just use this command:

sudo apt-get install screen

Now you can start a new screen session by just typing screen at the command line. You’ll be shown some information about screen. Hit enter, and you’ll be at a normal prompt.

To disconnect (but leave the session running)

Hit Ctrl + A and then Ctrl + D in immediate succession. You will see the message [detached]

To reconnect to an already running session

screen -r

To reconnect to an existing session, or create a new one if none exists

screen -D -r

To create a new window inside of a running screen session

Hit Ctrl + A and then C in immediate succession. You will see a new prompt.

To switch from one screen window to another

Hit Ctrl + A and then Ctrl + A in immediate succession.

To list open screen windows

Hit Ctrl + A and then W in immediate succession

There are lots of other commands, but those are the ones I use the most.
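One addition worth knowing: sessions can be named, which makes reattaching much easier when several are running (the session name below is just an example):

```shell
screen -S backups      # start a session named "backups"
screen -ls             # list running sessions (shown as pid.name)
screen -r backups      # reattach to it by name
```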


Source: https://www.howtogeek.com/howto/ubuntu/keep-your-ssh-session-running-when-you-disconnect/

scp examples

Example syntax for Secure Copy (scp)

What is Secure Copy?

scp allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.

Examples

Copy the file “foobar.txt” from a remote host to the local host

$ scp your_username@remotehost.edu:foobar.txt /some/local/directory

Copy the file “foobar.txt” from the local host to a remote host

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy the directory “foo” from the local host to a remote host’s directory “bar”

$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar

Copy the file “foobar.txt” from remote host “rh1.edu” to remote host “rh2.edu”

$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
your_username@rh2.edu:/some/remote/directory/

Copying the files “foo.txt” and “bar.txt” from the local host to your home directory on the remote host

$ scp foo.txt bar.txt your_username@remotehost.edu:~

Copy the file “foobar.txt” from the local host to a remote host using port 2264

$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy multiple files from the remote host to your current directory on the local host

$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .
$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .

scp Performance

Older versions of scp defaulted to the Triple-DES cipher for encrypting the data being sent, and switching to the Blowfish cipher was a common way to increase speed; this could be done with the option -c blowfish on the command line. Note that recent OpenSSH releases have removed Blowfish, so this tip applies to legacy systems only.

$ scp -c blowfish some_file your_username@remotehost.edu:~

It is often suggested that the -C option for compression should also be used to increase speed. The effect of compression, however, will only significantly increase speed if your connection is very slow. Otherwise it may just be adding extra burden to the CPU. An example of using blowfish and compression:

$ scp -c blowfish -C local_file your_username@remotehost.edu:~
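On a modern system you can check which ciphers your OpenSSH build actually offers; Blowfish will be missing from recent releases, where an AES cipher (hardware-accelerated on most CPUs) is the fast choice:

```shell
# List the ciphers the local ssh client supports
ssh -Q cipher

# Example using a modern fast cipher instead of blowfish
# (assumes both ends support it)
scp -c aes128-gcm@openssh.com some_file your_username@remotehost.edu:~
```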

** Source – http://www.hypexr.org/linux_scp_help.php

Command Line Ping Sweep

Sometimes it can be handy to 'see' what is around you on a network, for instance when you're using DHCP and want to find which addresses are already taken, or when you want to check whether specific machines are up and running. Of course there are various tools you can install or use, but there are times when you just can't reach for the right tool(s), and you don't want to set anything up just for a ping sweep.
Of course, there is a way of ping sweeping from the command line, by simply combining for, ping, and grep in a clever way.

for i in {1..254}; do ping -c 1 -W 1 192.168.1.$i | grep 'from'; done

Naturally, you can stop the ping sweep by pressing Ctrl+C (Ctrl+Z would only suspend it).
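A small variation on the same loop prints only the addresses that answered, and runs the pings in parallel so the sweep finishes in a second or two rather than a few minutes (a sketch; the 192.168.1.x range is the same example subnet as above):

```shell
# Same sweep, but keep only the responding addresses, one ping per host in parallel
for i in {1..254}; do
  ( ping -c 1 -W 1 "192.168.1.$i" | awk -F'[ :]' '/bytes from/ {print $4}' ) &
done
wait
```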

Full Server Backup and Move Using rsync

Need to make a quick backup of your server? Maybe even move all your important files? Use the following commands to back up or replicate your entire Linux server without any pain; if you are sending the backup to a remote server, use the second command. Both commands back up non-system files (settings and user files, not software), which is perfect if you are migrating an entire server or just need a very usable backup that keeps your directory structure intact all the way to the root folder. We recommend that these commands only be run by a qualified Linux technician: if you do not know where you are putting the files, there is a very real possibility of overwriting files you did not want overwritten, so use them with care. If you are replicating, we also recommend replicating to the same operating system as the original machine to prevent any oddities.

When in doubt, contact Mean Servers for a quote, we can do it for you!

Local Backup: rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /path/to/backup/folder

*Be sure to replace /path/to/backup/folder in the first command with the location where you wish to place the backup.

Remote Backup: rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / root@<remote ip>:/path/to/remote/backup/folder

*Be sure to replace <remote ip> with the IP address of the server you are sending the backup to, and /path/to/remote/backup/folder with the full path to the remote backup folder location.

Installing CSF (ConfigServer Security & Firewall)

CSF, ConfigServer Security & Firewall, is a powerful firewall made for Linux systems. It comes with an easy-to-use CLI (Command Line Interface) and integrates with the DirectAdmin and cPanel control panels as a graphical user interface (GUI) that is just as powerful as its CLI counterpart. Installing CSF is easy, but as with any change to a server, you can cause damage if you do not know what you are doing. If you are unfamiliar with using the command line via SSH, we recommend contacting our sales department for an installation quote.

Note that this tutorial was written for Linux systems running a RedHat variant such as RHEL or CentOS. If you are using a different Linux flavor, you may need to adapt certain commands to your system. As with all our tutorials, this is provided as-is and comes with no warranty or support whatsoever. Should you prefer that Mean Servers install CSF for you, please contact the sales department for a quote.

Installing CSF

1.) Login to your server via SSH.

2.) Obtain the latest CSF package directly from ConfigServer by running: wget https://download.configserver.com/csf.tgz

3.) Untar the downloaded package by running: tar -xvzf csf.tgz

4.) Change into the newly created csf directory by running: cd csf/

If you are installing on a DirectAdmin system, proceed to Step 5-DA. If you are installing on a cPanel system, proceed to Step 5-CP. If you are installing on a Linux system without a control panel, proceed to Step 5-NP.

5-DA.) Run the following command: ./install.directadmin.sh

5-CP.) Run the following command: ./install.cpanel.sh

5-NP.) Run the following command: ./install.sh

6.) CSF has now been installed. If you wish to add the graphing ability, run the following command: yum install perl-GDGraph

7.) CSF now needs to be configured, which is beyond the scope of this article. Luckily, CSF is pretty much self explanatory when using either the CLI or GUI. Just read the instructions or hints if you are unsure what the purpose of a certain function is.

You can configure CSF via the CLI, or via the GUI when using DirectAdmin/cPanel; without a control panel, use the CLI. The DirectAdmin GUI is located at the Admin Level under Extra Features, titled ConfigServer Firewall&Security. In cPanel, the CSF GUI is available from the left-hand menu; just search for ConfigServer Firewall&Security. From the command line, run csf, which will show a list of available commands along with helpful hints.

**Important**: Do not forget to whitelist your IP address and take CSF out of test mode once you have tested your settings to ensure they work as you expected.
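As a minimal sketch of those two steps on the CLI (the IP address below is a documentation placeholder; substitute your own):

```shell
# Whitelist your own IP so a mistake can't lock you out
csf -a 203.0.113.10

# Once your rules are confirmed working, turn off TESTING mode
# in the main config file, then restart CSF
sed -i 's/^TESTING = "1"/TESTING = "0"/' /etc/csf/csf.conf
csf -r
```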

Synchronizing folders with rsync

** 100% Credit to Juan Valencia – http://www.jveweb.net/en/archives/2010/11/synchronizing-folders-with-rsync.html


The basics of rsync

rsync is a very versatile copying and backup tool that is included by default in almost every Linux distribution. It can be used as an advanced copying tool, allowing us to copy files both locally and remotely. It can also be used as a backup tool. It supports the creation of incremental backups.

rsync features a well-known delta-transfer algorithm that allows us to transfer new files as well as recent changes to existing files, while ignoring unchanged files. In addition, the behavior of rsync can be thoroughly customized, helping us automate backups; it can also be run as a daemon to turn the computer into a host and allow rsync clients to connect to it.

Besides copying local files and folders, rsync allows us to copy over SSH (Secure Shell) and RSH (Remote Shell), and it can run as a daemon on a computer, allowing other computers to connect to it; when rsync runs as a daemon it listens on TCP port 873.

When we use rsync as a daemon, or when we use RSH, the data sent between computers travels unencrypted. This is fine for transferring files between two computers on the same local network, but it shouldn't be used over insecure networks such as the Internet. For that purpose, SSH is the way to go.

This is the main reason I favor SSH for my transfers; besides, since SSH is secure, many servers have the SSH daemon available. Still, running rsync as a daemon is useful for transfers over fast connections, as is usually the case on a local network. I don't have the RSH daemon running on my computers, so you may find me a bit biased toward SSH in the examples. The examples covering transfers between two computers use SSH as the transport, but a separate post covers running rsync as a daemon.

Copying local files and folders

To copy the contents of one local folder into another, replacing the files in the destination folder, we use:

rsync -rtv source_folder/ destination_folder/

In the source_folder, notice that I added a slash at the end. Doing this prevents a new folder from being created; if we don't add the slash, a new folder named after the source folder will be created inside the destination folder. So, if you want to copy the contents of a folder called Pictures into an existing folder also called Pictures in a different location, you need the trailing slash; otherwise a folder called Pictures is created inside the Pictures folder we specify as the destination.


The parameter -r means recursive; that is, rsync will copy the contents of the source folder as well as the contents of every folder inside it.

The parameter -t makes rsync preserve the modification times of the files that it copies from the source folder.

The parameter -v means verbose; it prints information about the execution of the command, such as the files that are successfully transferred, so we can use it to keep track of rsync's progress.

These are the parameters I use most frequently, as I am usually backing up personal files, which don't contain things such as symlinks; but another very useful parameter is -a.

rsync -av source/ destination/

The parameter -a also makes the copy recursive and preserves modification times, but additionally it copies the symlinks it encounters as symlinks, and preserves permissions, owner and group information, and device and special files. This is useful if you are copying a user's entire home folder, or copying system folders somewhere else.

Dealing with whitespace and rare characters

We can escape spaces and rare characters just as in bash, by putting \ before any whitespace or rare character. Alternatively, we can enclose the string in single quotes:

rsync -rtv so\{ur\ ce/ dest\ ina\{tion/
rsync -rtv 'so{ur ce/' 'dest ina{tion/'

Update the contents of a folder

In order to save bandwidth and time, we can avoid copying files that already exist in the destination folder and have not been modified in the source folder. To do this, we add the parameter -u, which synchronizes the destination folder with the source folder; this is where the delta-transfer algorithm comes in. To synchronize two folders like this we use:

rsync -rtvu source_folder/ destination_folder/

By default, rsync uses the modification date and size of a file to decide whether the file, or part of it, needs to be transferred. Instead, we can use a hash to decide whether a file differs, by adding the -c parameter, which performs a checksum on the files to be transferred and skips any file whose checksum matches.

rsync -rtvuc source_folder/ destination_folder/

Synchronizing two folders with rsync

To keep two folders in sync, we not only need to add the new files from the source folder to the destination folder, as in the previous topics; we also need to remove from the destination folder the files that were deleted in the source folder. rsync allows this with the parameter --delete, which, used in conjunction with the previously explained -u to update modified files, lets us keep two directories in sync while saving bandwidth.

rsync -rtvu --delete source_folder/ destination_folder/

The deletion process can take place in different moments of the transfer by adding some additional parameters:

  • rsync can look for missing files and delete them before the transfer process; this is the default behavior and can be set explicitly with --delete-before
  • rsync can look for missing files after the transfer is completed, with the parameter --delete-after
  • rsync can delete files during the transfer: when a file is found to be missing, it is deleted at that moment; we enable this behavior with --delete-during
  • rsync can find the missing files during the transfer, but instead of deleting them as it goes, wait until the transfer is finished and delete them afterwards; this is accomplished with --delete-delay

e.g.:

rsync -rtvu --delete-delay source_folder/ destination_folder/

Compressing the files while transferring them

To save some bandwidth, and usually some time as well, we can compress the information being transferred; to accomplish this we add the parameter -z.

rsync -rtvz source_folder/ destination_folder/

Note, however, that if we are transferring a large number of small files over a fast connection, rsync may be slower with -z than without it, as compressing every file before transferring it takes longer than just transferring the files directly. Use this parameter if the connection between the two computers has limited speed, or if you need to save bandwidth.

Transferring files between two remote systems

rsync can copy files and synchronize a local folder with a remote folder in a system running the SSH daemon, the RSH daemon, or the rsync daemon. The examples here use SSH for the file transfers, but the same principles apply if you want to do this with rsync as a daemon in the host computer, read Running rsync as a daemon when ssh is not available further below for more information about this.

To transfer files between the local computer and a remote computer, we need to specify the address of the remote system; it may be a domain name, an IP address, or the name of a server we have already saved in our SSH config file (information about how to do this can be found in Defining SSH servers), followed by a colon and the folder we want to use for the transfer. Note that rsync cannot transfer files between two remote systems; only a local folder or a remote folder can be used in conjunction with a local folder. To do this we use:

Local folder to remote folder, using a domain, an IP address and a server defined in the SSH configuration file:
rsync -rtvz source_folder/ user@domain:/path/to/destination_folder/
rsync -rtvz source_folder/ user@xxx.xxx.xxx.xxx:/path/to/destination_folder/
rsync -rtvz source_folder/ server_name:/path/to/destination_folder/

Remote folder to local folder, using a domain, an IP address and a server defined in the SSH configuration file:
rsync -rtvz user@domain:/path/to/source_folder/ destination_folder/
rsync -rtvz user@xxx.xxx.xxx.xxx:/path/to/source_folder/ destination_folder/
rsync -rtvz server_name:/path/to/source_folder/ destination_folder/

Excluding files and directories

There are many cases in which we need to exclude certain files and directories from rsync. A common case is when we synchronize a local project with a remote repository, or even with the live site; here we may want to exclude some development directories and some hidden files from being transferred to the live site. Excluding files is done with --exclude followed by the directory or file we want to exclude. The source or destination folder can be local or remote, as explained in the previous section.

rsync -rtv --exclude 'directory' source_folder/ destination_folder/
rsync -rtv --exclude 'file.txt' source_folder/ destination_folder/
rsync -rtv --exclude 'path/to/directory' source_folder/ destination_folder/
rsync -rtv --exclude 'path/to/file.txt' source_folder/ destination_folder/

The paths are relative to the folder from which we call rsync, unless they start with a slash, in which case they are absolute.

Another way to do this is to create a file listing the files and directories to exclude, as well as patterns (all files matching a pattern are excluded; *.txt would exclude any file with that extension), one per line, and pass this file with --exclude-from. First, we create and edit the file in our favorite text editor; in this example I use gedit, but you may use kate, Vim, nano, or any other text editor:

touch excluded.txt
gedit excluded.txt

In this file we can add the following:

directory
relative/path/to/directory
file.txt
relative/path/to/file.txt
/home/juan/directory
/home/juan/file.txt
*.swp

And then we call rsync:

rsync -rvz --exclude-from 'excluded.txt' source_folder/ destination_folder/

In addition to deleting files that have been removed from the source folder, as explained in Synchronizing two folders with rsync, rsync can delete files that are excluded from the transfer; we do this with the parameter --delete-excluded, e.g.:

rsync -rtv --exclude-from 'excluded.txt' --delete-excluded source/ destination/

This command makes rsync recursive, preserves the modification times from the source folder, increases verbosity, excludes all files matching the patterns in excluded.txt, and deletes any of these files that exist in the destination folder.

Running rsync as a daemon when ssh is not available

This was moved to its own section, Running rsync as a daemon.

Some additional rsync parameters

-t Preserves the modification times of the files being transferred.
-q Suppresses any non-error message; this is the contrary of -v, which increases verbosity.
-d Transfers a directory without recursing; that is, only the files directly inside the folder are transferred.
-l Copies symlinks as symlinks.
-L Copies the file a symlink points to whenever it finds a symlink.
-W Copies whole files. With the delta-transfer algorithm we only copy the part of a file that was updated; sometimes this is not desired.
--progress Shows the progress of the files being transferred.
-h Shows the information rsync provides in a human-readable format; amounts are given in K’s, M’s, G’s and so on.

Footnotes

The number of options rsync provides is immense: we can define exactly which files to transfer, which files to compress, and which files to delete in the destination folder if they exist, and we can deal with system files as well. For more information, see man rsync and man rsyncd.conf.

I leave backups out of this post, as they will be covered, together with the automation of backups, in an upcoming post.

It is possible to run rsync on Windows using cygwin; however, I don’t have a Windows box available at the moment (nor do I plan to acquire one in the foreseeable future), so even though I have done it, I can’t post about it. If you run rsync as a service on Windows, though, you need to add the line “strict mode = false” in rsyncd.conf under the modules area; this prevents rsync from checking the permissions on the secrets file and failing because they are not properly set (permissions don’t work the same way as on Linux).

Rsync with a non-standard ssh port

After some searching, the man page of rsync finally offered a solution:

# rsync -avz -e "ssh -p $portNumber" user@remoteip:/path/to/files/ /local/path/

or

# rsync -avz -e "ssh -p $portNumber" /local/folder user@remoteip:/path/to/files

Passing the port parameter to ssh with the -e option worked like a charm. 
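Alternatively, the port can be pinned once in ~/.ssh/config so that rsync (and ssh, scp) pick it up automatically; the alias name, host, user, and port below are placeholders:

```shell
# Append a host alias to the SSH client config
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName remoteip
    User user
    Port 2222
EOF

# Now no -e flag is needed; the alias carries the port
rsync -avz myserver:/path/to/files/ /local/path/
```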

This is why Unix rocks.