<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Security &#8211; Everything is Broken</title>
	<atom:link href="https://play.datalude.com/blog/category/security/feed/" rel="self" type="application/rss+xml" />
	<link>https://play.datalude.com/blog</link>
	<description>Efficiency vs. Inefficiency, in a no-holds barred fight.</description>
	<lastBuildDate>Wed, 10 Sep 2025 05:14:58 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>
	<item>
		<title>rrsync: a hidden gem</title>
		<link>https://play.datalude.com/blog/2025/09/rrsync-a-hidden-gem/</link>
					<comments>https://play.datalude.com/blog/2025/09/rrsync-a-hidden-gem/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 10 Sep 2025 05:14:57 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=764</guid>

					<description><![CDATA[I've been using rsync for decades, and had never come across its cousin rrsync, until a google search put it on my map. I was revisiting the inherent security problem in rsync backups: if you've given ssh access to a server, it can typically do a lot more than just rsync. To limit the damage, ... <a title="rrsync: a hidden gem" class="read-more" href="https://play.datalude.com/blog/2025/09/rrsync-a-hidden-gem/" aria-label="Read more about rrsync: a hidden gem">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">I've been using rsync for decades, but had never come across its cousin rrsync until a Google search put it on my map. I was revisiting the inherent security problem in rsync backups: if you've given a machine ssh access to a server, that machine can typically do a lot more than just rsync. <br><br>To limit the damage, I'd already set up a 'pull' backup, so the backup server grabs the files from the production server. If you do it the other way around, an attacker gaining access to your production server can delete the data there, plus all the remote backups! So I'd already taken one step in the right direction, but it wasn't enough. Which is where rrsync comes in. </p>



<p class="wp-block-paragraph">It's installed along with rsync, and resides in /usr/bin/rrsync (on Ubuntu at least). It's basically a wrapper that restricts a remote rsync client to a named directory, and can additionally make that directory read-only. </p>



<pre class="wp-block-code"><code>/usr/bin/rrsync -ro /backups/</code></pre>



<p class="wp-block-paragraph">You've probably already added the ssh key to the user's authorized_keys file on your production server, so &#8230;</p>



<pre class="wp-block-code"><code># Change this
ssh-rsa AAAAAAgasaofasdfndsfasdfablahblah
# To this
command="/usr/bin/rrsync -ro /backups",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAAAgasaofasdfndsfasdfablahblah</code></pre>



<p class="wp-block-paragraph">On your backup server you might already be running rsync to pull a directory over from your production server. So, </p>



<pre class="wp-block-code"><code># Change this
rsync -avz -e ssh user@production.com:/backups/ /local/backups/production/
# To this
rsync -avz -e ssh user@production.com: /local/backups/production/
</code></pre>



<p class="wp-block-paragraph">It looks wrong, like it will back up the whole server, but on the production server's end rrsync will only let the backup server 'see' the single directory you specified in authorized_keys. Any other command you try to run from the backup server on the production server will fail. </p>



<p class="wp-block-paragraph">Limitation: As far as I can see, you can only specify a single directory. To back up two directories, you'd need to connect with two separate ssh keys, one per directory. </p>
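<p class="wp-block-paragraph">So a two-directory setup would look something like this sketch (the key material, paths and backupuser name are hypothetical):</p>

```shell
# authorized_keys on the production server: one key per exported directory
command="/usr/bin/rrsync -ro /backups",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...key1...
command="/usr/bin/rrsync -ro /var/www",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...key2...

# On the backup server, choose the directory by choosing the key
rsync -avz -e "ssh -i ~/.ssh/backup_key1" backupuser@production.com: /local/backups/backups/
rsync -avz -e "ssh -i ~/.ssh/backup_key2" backupuser@production.com: /local/backups/www/
```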



]]></content:encoded>
					
					<wfw:commentRss>https://play.datalude.com/blog/2025/09/rrsync-a-hidden-gem/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Google Gemini bad ssh advice locks people out of their servers.</title>
		<link>https://play.datalude.com/blog/2025/08/google-gemini-bad-ssh-advice-locks-people-out-of-their-servers/</link>
					<comments>https://play.datalude.com/blog/2025/08/google-gemini-bad-ssh-advice-locks-people-out-of-their-servers/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 27 Aug 2025 02:21:34 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=756</guid>

					<description><![CDATA[I fell for this one myself, so I present it here as a cautionary tale. I've been running Linux servers for 30 years or so, and have a muscle memory associated with changing sshd ports. Edit sshd_config to change Port=123 , update firewall to allow port 123, restart ssh, test config from another box to ... <a title="Google Gemini bad ssh advice locks people out of their servers." class="read-more" href="https://play.datalude.com/blog/2025/08/google-gemini-bad-ssh-advice-locks-people-out-of-their-servers/" aria-label="Read more about Google Gemini bad ssh advice locks people out of their servers.">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">I fell for this one myself, so I present it here as a cautionary tale. </p>



<p class="wp-block-paragraph">I've been running Linux servers for 30 years or so, and have a muscle memory associated with changing sshd ports. Edit sshd_config to set Port 123, update the firewall to allow port 123, restart ssh, test from another box on port 123, done. <br><br>Imagine my horror when that didn't work any more. Having done this hundreds or thousands of times, I might occasionally skip the 'test from another box' stage, but I did it this time and was unable to connect. </p>



<p class="wp-block-paragraph">After a lot of searching, I found the culprit was systemd, or rather Ubuntu's decision to hand control of sshd over to systemd socket activation, which means edits to /etc/ssh/sshd_config get ignored. And when you search for things these days on the internet, you get some AI busting in to tell you how to fix it. In this case it was Google Gemini, which confidently told me:</p>
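<p class="wp-block-paragraph">Before acting on any advice, it's worth checking whether your box really is using socket activation; a quick check (unit names as on recent Ubuntu):</p>

```shell
# If ssh.socket is active, the Port setting in sshd_config is ignored
systemctl is-active ssh.socket ssh.service

# See what the socket unit is actually configured to listen on
systemctl cat ssh.socket | grep -i listen
```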



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="wp-block-paragraph">To change the port or address while keeping socket activation, you must modify the <code>sshd.socket</code> unit's configuration. The best practice is to create a "drop-in" file to override the default settings without directly editing the main unit file.</p>



<ul class="wp-block-list">
<li>Create a directory: <code>sudo mkdir -p /etc/systemd/system/ssh.socket.d</code></li>



<li>Create a new drop-in file: <code>sudo nano /etc/systemd/system/ssh.socket.d/listen.conf</code></li>



<li>Add the following content, replacing <code>1234</code> with your desired port. The empty <code>ListenStream=</code> line is crucial as it clears the default port <code>22</code>.<code>[Socket] ListenStream= ListenStream=1234</code></li>



<li>Reload the systemd daemon: <code>sudo systemctl daemon-reload</code></li>



<li>Restart the socket: <code>sudo systemctl restart ssh.socket</code></li>



<li>Update your firewall rules to allow the new port.</li>
</ul>
</blockquote>



<p class="wp-block-paragraph">Well, the only trouble is, that didn't work. It left my system's sshd listening on IPv6 ONLY. And although the server had IPv6, my connection did not. </p>



<p class="wp-block-paragraph">Luckily I was able to access the server via the host's console and fix it, but this is a massive gotcha on VM hosts that don't provide console access. </p>



<p class="wp-block-paragraph">So how to fix it, really Google? Well, if you want to stick with ssh.socket:</p>



<pre class="wp-block-code"><code>sudo systemctl edit ssh.socket
# Add
&#91;Socket]
ListenStream=
ListenStream=0.0.0.0:123
ListenStream=&#91;::]:123

# Restart 
sudo systemctl daemon-reload
sudo systemctl restart ssh.socket

# Verify that sshd is now listening on your new port for both IPv4 and IPv6:
sudo ss -tulpn | grep 123</code></pre>



<p class="wp-block-paragraph">OR, if you want to go back to the way things were, so you don't get confused &#8230;.</p>



<pre class="wp-block-code"><code># Disable and stop the socket unit:
sudo systemctl disable --now ssh.socket

# Enable and start the service unit:
sudo systemctl enable --now ssh.service</code></pre>



<p class="wp-block-paragraph">Now you can edit /etc/ssh/sshd_config and simply run <code>sudo systemctl restart ssh.service</code> for changes to take effect.</p>
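<p class="wp-block-paragraph">Either way, it's worth validating the config before restarting, so a typo can't lock you out again; a sketch (the port is an example):</p>

```shell
# After editing /etc/ssh/sshd_config (e.g. "Port 123"):
sudo sshd -t                        # syntax-check the config; silent on success
sudo systemctl restart ssh.service
sudo ss -tlnp | grep sshd           # confirm the listener before closing your session
```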



<p class="wp-block-paragraph">The crazy thing is, if you tell Google that its recommended method doesn't work, it cheerfully acknowledges it and tells you the correct way to do it. </p>



<p class="wp-block-paragraph">Be careful out there with AI. </p>
]]></content:encoded>
					
					<wfw:commentRss>https://play.datalude.com/blog/2025/08/google-gemini-bad-ssh-advice-locks-people-out-of-their-servers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Bash script for managing htaccess files</title>
		<link>https://play.datalude.com/blog/2025/08/bash-script-for-managing-htaccess-files/</link>
					<comments>https://play.datalude.com/blog/2025/08/bash-script-for-managing-htaccess-files/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 20 Aug 2025 05:19:04 +0000</pubDate>
				<category><![CDATA[Bash Script]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=752</guid>

					<description><![CDATA[Had to create this for a less technical user to manage htaccess, so thought I'd share. Your mileage may vary. Change the location of the default htpass file in the config. You can supply another location in mid script if you have more than one. #!/bin/bash# ConfigurationHTACCESS_FILE_DEFAULT="htpass_test"# Function to display a list of usersdisplay_users() { ... <a title="Bash script for managing htaccess files" class="read-more" href="https://play.datalude.com/blog/2025/08/bash-script-for-managing-htaccess-files/" aria-label="Read more about Bash script for managing htaccess files">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Had to create this for a less technical user to manage .htpasswd files, so I thought I'd share. Your mileage may vary. <br><br>Change the location of the default htpasswd file in the config. You can supply another location mid-script if you have more than one. </p>



<pre class="wp-block-preformatted">#!/bin/bash

# Configuration
HTACCESS_FILE_DEFAULT="htpass_test"

# Function to display a list of users
display_users() {
  echo "--- Existing Users ---"
  # Grep for lines that don't start with # and aren't empty
  # Then use cut to show only the username before the colon, and number the lines
  grep -vE '^(#|$)' "$HTACCESS_FILE" | cut -d':' -f1 | cat -n
  echo "----------------------"
}

# Function to perform a backup
backup_file() {
  if [ -f "$HTACCESS_FILE" ]; then
    cp "$HTACCESS_FILE" "$HTACCESS_FILE.bak"
    echo "Backup created at $HTACCESS_FILE.bak"
  else
    echo "No .htpasswd file to back up."
  fi
}

# Function to generate a random password
generate_password() {
    tr -cd '[:alnum:]' &lt; /dev/urandom | head -c 12
}

# --- Main Script ---

# Select the .htpasswd file
read -p "Enter .htpasswd file (default: $HTACCESS_FILE_DEFAULT): " HTACCESS_FILE
HTACCESS_FILE=${HTACCESS_FILE:-$HTACCESS_FILE_DEFAULT}

# Ensure the file exists, exit if not
if [ ! -f "$HTACCESS_FILE" ]; then
  echo "Password file doesn't exist"
  exit 1
fi

# Main menu loop
while true; do
  echo "Do you want to:"
  echo "a) Add a user"
  echo "r) Remove a user"
  echo "u) Update a user's password"
  echo "l) List users"
  echo "q) Quit"
  read -p "Enter your choice: " choice

  case "$choice" in
    a)
      read -p "Enter username to add: " username
      # Check if the username already exists
      if grep -q "^$username:" "$HTACCESS_FILE"; then
        echo "Error: User '$username' already exists. Use the 'u' option to update their password."
      else
        suggested_password=$(generate_password)
        read -p "Enter password for '$username' (or press Enter to use suggested: $suggested_password): " password
        password=${password:-$suggested_password}

        backup_file
        htpasswd -b "$HTACCESS_FILE" "$username" "$password"
        echo "User '$username' added."
      fi
      ;;
    r)
      display_users
      read -p "Enter reference number of user to remove: " ref
      # Use grep and sed to find the line number and get the username
      username_to_remove=$(grep -vE '^(#|$)' "$HTACCESS_FILE" | sed -n "${ref}p" | cut -d':' -f1)

      if [ -z "$username_to_remove" ]; then
        echo "Invalid reference number."
      else
        backup_file
        # Create a temp file without the user and then replace the original
        grep -v "^$username_to_remove:" "$HTACCESS_FILE" > "$HTACCESS_FILE.tmp" &amp;&amp; mv "$HTACCESS_FILE.tmp" "$HTACCESS_FILE"
        echo "User '$username_to_remove' removed."
      fi
      ;;
    u)
      display_users
      read -p "Enter reference number of user to update: " ref
      username_to_update=$(grep -vE '^(#|$)' "$HTACCESS_FILE" | sed -n "${ref}p" | cut -d':' -f1)

      if [ -z "$username_to_update" ]; then
        echo "Invalid reference number."
      else
        suggested_password=$(generate_password)
        read -p "Enter new password for '$username_to_update' (or press Enter to use suggested: $suggested_password): " password
        password=${password:-$suggested_password}

        backup_file
        htpasswd -b "$HTACCESS_FILE" "$username_to_update" "$password"
        echo "Password for user '$username_to_update' updated."
      fi
      ;;
    l)
      display_users
      ;;
    q)
      echo "Exiting."
      exit 0
      ;;
    *)
      echo "Invalid option. Please try again."
      ;;
  esac

  echo # Add a newline for spacing
done</pre>
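<p class="wp-block-paragraph">The script leans on htpasswd (from apache2-utils) for add/update; for reference, the one-off equivalents of what it automates would be roughly (file name and username are examples):</p>

```shell
htpasswd -nbB alice 's3cret'             # print a bcrypt entry to stdout, touching no files
htpasswd -b htpass_test alice 's3cret'   # add or update alice in an existing file
htpasswd -D htpass_test alice            # delete alice from the file
```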
]]></content:encoded>
					
					<wfw:commentRss>https://play.datalude.com/blog/2025/08/bash-script-for-managing-htaccess-files/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Error reporting settings in php-fpm not recognized</title>
		<link>https://play.datalude.com/blog/2025/04/error-reporting-settings-in-php-fpm-not-recognized/</link>
					<comments>https://play.datalude.com/blog/2025/04/error-reporting-settings-in-php-fpm-not-recognized/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 21 Apr 2025 05:39:52 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Wordpress]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=739</guid>

					<description><![CDATA[I've been frustrated by this one several times. If you run an nginx server with php-fpm you'll be used to setting php values in the php-fpm pool file. That should supercede any values in php.ini So with some values this works: These all work fine. But if you look anywhere on the internet, it tells ... <a title="Error reporting settings in php-fpm not recognized" class="read-more" href="https://play.datalude.com/blog/2025/04/error-reporting-settings-in-php-fpm-not-recognized/" aria-label="Read more about Error reporting settings in php-fpm not recognized">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">I've been frustrated by this one several times. If you run an nginx server with php-fpm, you'll be used to setting PHP values in the php-fpm pool file. Those should supersede any values in php.ini.<br><br>So with some values this works:</p>



<pre class="wp-block-code"><code>;;; php settings
php_flag&#91;display_errors] = off
php_admin_value&#91;error_log] = /home/bobthebuilder/logs/phpfpm.log
php_admin_flag&#91;log_errors] = on
</code></pre>



<p class="wp-block-paragraph">These all work fine. But if you look anywhere on the internet, it tells you you can also set error_reporting there. It confidently tells you to use</p>



<pre class="wp-block-code"><code>php_admin_value&#91;error_reporting] = E_ALL &amp; ~E_WARNING &amp; ~E_NOTICE &amp; ~E_DEPRECATED &amp; ~E_USER_DEPRECATED
</code></pre>



<p class="wp-block-paragraph">&#8230;for example. But this doesn't work! You check and re-check the log files which are scrolling past at nausea-inducing rates. You try altering the values in php.ini, wordpress config files, anywhere you can try. But when you load up phpinfo, it just tells you you're wasting your time. You shout at google gemini a bit, but it tells you the same thing. Maybe you have some formatting errors, it suggests?<br><br>The trick, it seems is to use the numeric values. This works in your phpfpm pool file instead of the command above.</p>



<pre class="wp-block-code"><code>php_admin_value&#91;error_reporting] = 8181</code></pre>



<p class="wp-block-paragraph">That's it. And there's a pretty handy page for calculating the masks <a href="https://maximivanov.github.io/php-error-reporting-calculator/">here</a>. <br><br>Now wouldn't it be nice if PHP had pre-set log levels so all you have to do is put error_log_level = 3 instead of invoking a bitmask calculator &#8230;</p>
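<p class="wp-block-paragraph">As a sanity check, the same bitmask can be computed with plain shell arithmetic, assuming the stock PHP constant values (E_ALL has been 32767 since PHP 5.4; earlier versions differed):</p>

```shell
# PHP error-level constants, hard-coded here (assumed values for PHP >= 5.4)
E_ALL=32767 E_WARNING=2 E_NOTICE=8 E_DEPRECATED=8192 E_USER_DEPRECATED=16384

# The expression the docs suggest, evaluated numerically
echo $(( E_ALL & ~E_WARNING & ~E_NOTICE & ~E_DEPRECATED & ~E_USER_DEPRECATED ))   # prints 8181
```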
]]></content:encoded>
					
					<wfw:commentRss>https://play.datalude.com/blog/2025/04/error-reporting-settings-in-php-fpm-not-recognized/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Updraft Backup settings for Hetzner S3 storage</title>
		<link>https://play.datalude.com/blog/2025/02/updraft-backup-settings-for-hetzner-s3-storage/</link>
					<comments>https://play.datalude.com/blog/2025/02/updraft-backup-settings-for-hetzner-s3-storage/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 01 Feb 2025 08:18:20 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Wordpress]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=736</guid>

					<description><![CDATA[Just a quick one, as I couldn't find the answer around the internet. That's it. Hit Test, and away you go. Backing up from US to Hetzner Germany took about 20 mins for 1.2Gb backup. Slow, but it completed without error.]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Just a quick one, as I couldn't find the answer around the internet. </p>



<ul class="wp-block-list">
<li>In Updraft, choose the <strong>S3-Compatible (generic)</strong> storage option</li>



<li><strong>S3 Access key</strong> and <strong>S3 Secret Key</strong> are obvious enough and are as given by Hetzner. Access Key is the smaller of the two. </li>



<li>In <strong>S3 Location</strong>, you just need the bucket name, so that it reads s3generic://mybucketname (i.e. just type mybucketname in the box)</li>



<li>In the <strong>S3 endpoint </strong>box, you want just the domain name of the storage server, e.g. fsn1.your-objectstorage.com</li>
</ul>



<p class="wp-block-paragraph">That's it. Hit Test, and away you go. </p>



<p class="wp-block-paragraph">Backing up from the US to Hetzner Germany took about 20 minutes for a 1.2GB backup. Slow, but it completed without error. </p>



]]></content:encoded>
					
					<wfw:commentRss>https://play.datalude.com/blog/2025/02/updraft-backup-settings-for-hetzner-s3-storage/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>systemctl journals filling up your disk</title>
		<link>https://play.datalude.com/blog/2020/01/systemctl-journals-filling-up-your-disk/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 29 Jan 2020 08:10:14 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=537</guid>

					<description><![CDATA[Quick one &#8230; your system probably uses logrotate to keep a fixed number of logs, and which stops your disk filling up. Trouble is, that systemctl doesn't write logs in the normal way so you can't rely on logrotate any more. Check out your current systemctl log usage with journalctl --disk-usage Wait, whaaaat? Its using ... <a title="systemctl journals filling up your disk" class="read-more" href="https://play.datalude.com/blog/2020/01/systemctl-journals-filling-up-your-disk/" aria-label="Read more about systemctl journals filling up your disk">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Quick one &#8230; your system probably uses logrotate to keep a fixed number of logs, which stops your disk filling up. Trouble is, systemd doesn't write its journal the normal way, so you can't rely on logrotate any more. </p>



<p class="wp-block-paragraph">Check out your current systemctl log usage with </p>



<pre class="wp-block-preformatted">journalctl --disk-usage</pre>



<p class="wp-block-paragraph">Wait, whaaaat? It's using up 4GB? By default the journal will take up to 15% of your disk, which seems a little greedy. You can immediately prune the logs with the following, which trims them down to the size you specify (200M, 1G, 500K, etc.). </p>



<pre class="wp-block-preformatted">journalctl --vacuum-size=500M</pre>



<p class="wp-block-paragraph">That buys you some space immediately on a system that's running out of room. For a longer-term solution, you can edit /etc/systemd/journald.conf and set, e.g.:</p>



<pre class="wp-block-code"><code>SystemMaxUse=500M
SystemKeepFree=1G</code></pre>
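<p class="wp-block-paragraph">Edits to journald.conf don't take effect until the daemon is restarted; on a systemd host that's something like:</p>

```shell
sudo systemctl restart systemd-journald   # apply the new limits
journalctl --disk-usage                   # confirm usage is now capped
```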



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Letsencrypt Wildcard Certificates, with acme.sh client</title>
		<link>https://play.datalude.com/blog/2018/03/letsencrypt-wildcard-certificates-with-acme-sh-client/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Wed, 14 Mar 2018 09:19:59 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=426</guid>

					<description><![CDATA[Took me a bit of time to figure this out, so I thought I'd make it public. Letsencrypt announced their new wildcard certs, and because I have to add the SSL cert to a load balancer covering many subdomains, I needed to make use of it. First thing to note is that not all clients ... <a title="Letsencrypt Wildcard Certificates, with acme.sh client" class="read-more" href="https://play.datalude.com/blog/2018/03/letsencrypt-wildcard-certificates-with-acme-sh-client/" aria-label="Read more about Letsencrypt Wildcard Certificates, with acme.sh client">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Took me a bit of time to figure this out, so I thought I'd make it public. Letsencrypt announced their new wildcard certs, and because I have to add the SSL cert to a load balancer covering many subdomains, I needed to make use of it.</p>



<p class="wp-block-paragraph">First thing to note is that not all clients support the new v2 API which is required for wildcard certs. I looked at the list of v2 supporting clients on the Letsencrypt site, and chose the <a href="https://github.com/Neilpang/acme.sh">acme.sh bash script</a>. Not sure if I'm going to stick with it at this point but it got me going.</p>



<p class="wp-block-paragraph">The first thing you need to do is run it with the --issue flag. You'll need to run it with DNS authentication, as that's the supported method for wildcard certs. You'll also need to run it with both the root domain AND the wildcard.</p>



<span id="more-426"></span>



<pre class="wp-block-code"><code>./acme.sh --log --issue --dns -d mydomain.com -d *.mydomain.com
&#91;Tue Mar 13 23:42:54 MDT 2018] Multi domain='DNS:mydomain.com,DNS:*.mydomain.com'
&#91;Tue Mar 13 23:42:54 MDT 2018] Getting domain auth token for each domain
&#91;Tue Mar 13 23:42:55 MDT 2018] Getting webroot for domain='mydomain.com'
&#91;Tue Mar 13 23:42:55 MDT 2018] Getting webroot for domain='*.mydomain.com'
&#91;Tue Mar 13 23:42:55 MDT 2018] Add the following TXT record:
&#91;Tue Mar 13 23:42:55 MDT 2018] Domain: '_acme-challenge.mydomain.com'
&#91;Tue Mar 13 23:42:55 MDT 2018] TXT value: '123XuRD_z9FHfdGIIQR5HIoNY1kCn7WjqqND2s1Nxyz'
&#91;Tue Mar 13 23:42:55 MDT 2018] Please be aware that you prepend _acme-challenge. before your domain
&#91;Tue Mar 13 23:42:55 MDT 2018] so the resulting subdomain will be: _acme-challenge.mydomain.com
&#91;Tue Mar 13 23:42:55 MDT 2018] Add the following TXT record:
&#91;Tue Mar 13 23:42:55 MDT 2018] Domain: '_acme-challenge.mydomain.com'
&#91;Tue Mar 13 23:42:55 MDT 2018] TXT value: '1233qm_dTj_tHeDljeZOgNywPCVKMrTxWcMHTYkrxyz'
&#91;Tue Mar 13 23:42:55 MDT 2018] Please be aware that you prepend _acme-challenge. before your domain
&#91;Tue Mar 13 23:42:55 MDT 2018] so the resulting subdomain will be: _acme-challenge.mydomain.com
&#91;Tue Mar 13 23:42:55 MDT 2018] Please add the TXT records to the domains, and retry again.
&#91;Tue Mar 13 23:42:55 MDT 2018] Please check log file for more details: /root/.acme.sh/acme.sh.log</code></pre>



<p class="wp-block-paragraph">You'll get a load of output telling you it failed, but it gives you the entries you need to put into your DNS zone editor. So now you need to add TWO TXT records to your domain:<br>
_acme-challenge.mydomain.com&nbsp;&nbsp;&nbsp;&nbsp;123XuRD_z9FHfdGIIQR5HIoNY1kCn7WjqqND2s1Nxyz<br>
_acme-challenge.mydomain.com&nbsp;&nbsp;&nbsp;&nbsp;1233qm_dTj_tHeDljeZOgNywPCVKMrTxWcMHTYkrxyz</p>



<p class="wp-block-paragraph">You can verify these with dig:
</p>



<pre class="wp-block-code"><code>dig txt _acme-challenge.mydomain.com
;; ANSWER SECTION:
_acme-challenge.mydomain.com. 60 IN TXT	"123XuRD_z9FHfdGIIQR5HIoNY1kCn7WjqqND2s1Nxyz"
_acme-challenge.mydomain.com. 60 IN TXT	"1233qm_dTj_tHeDljeZOgNywPCVKMrTxWcMHTYkrxyz"</code></pre>



<p class="wp-block-paragraph">Now you run the acme client again with the --renew flag.
</p>



<pre class="wp-block-code"><code>./acme.sh --renew -d mydomain.com -d *.mydomain.com
&#91;Tue Mar 13 23:51:49 MDT 2018] Renew: 'mydomain.com'
&#91;Tue Mar 13 23:51:50 MDT 2018] Multi domain='DNS:mydomain.com,DNS:*.mydomain.com'
&#91;Tue Mar 13 23:51:50 MDT 2018] Getting domain auth token for each domain
&#91;Tue Mar 13 23:51:50 MDT 2018] Verifying:mydomain.com
&#91;Tue Mar 13 23:51:53 MDT 2018] Success
&#91;Tue Mar 13 23:51:53 MDT 2018] Verifying:*.mydomain.com
&#91;Tue Mar 13 23:51:55 MDT 2018] Success
&#91;Tue Mar 13 23:51:55 MDT 2018] Verify finished, start to sign.
&#91;Tue Mar 13 23:51:56 MDT 2018] Cert success.

&#91;Tue Mar 13 23:51:56 MDT 2018] Your cert is in  /root/.acme.sh/mydomain.com/mydomain.com.cer 
&#91;Tue Mar 13 23:51:56 MDT 2018] Your cert key is in  /root/.acme.sh/mydomain.com/mydomain.com.key 
&#91;Tue Mar 13 23:51:56 MDT 2018] The intermediate CA cert is in  /root/.acme.sh/mydomain.com/ca.cer 
&#91;Tue Mar 13 23:51:56 MDT 2018] And the full chain certs is there:  /root/.acme.sh/mydomain.com/fullchain.cer</code></pre>



<p class="wp-block-paragraph">And that's about it. The client has more options to copy the cert into your apache/nginx config, but I had to add it by hand to the load balancer. Maybe I'll investigate automating this, and the cert renewal.</p>
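<p class="wp-block-paragraph">For the record, acme.sh can deploy the files itself via --install-cert; the paths and reload command below are placeholders for your own setup:</p>

```shell
./acme.sh --install-cert -d mydomain.com \
  --key-file       /etc/ssl/private/mydomain.com.key \
  --fullchain-file /etc/ssl/certs/mydomain.com.fullchain.pem \
  --reloadcmd      "systemctl reload nginx"
```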
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Switch from UFW and fail2ban to CSF</title>
		<link>https://play.datalude.com/blog/2017/11/switch-from-ufw-and-fail2ban-to-csf/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 10 Nov 2017 08:21:21 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=416</guid>

					<description><![CDATA[Having played with CSF for a while on one server, I've decided I like it more than UFW and fail2ban. It seems much better at blocking mail bruteforce attacks and SSH as a distributed attack. So anyway, here's a list of steps to achieve that, as much for my record as anything. The server is ... <a title="Switch from UFW and fail2ban to CSF" class="read-more" href="https://play.datalude.com/blog/2017/11/switch-from-ufw-and-fail2ban-to-csf/" aria-label="Read more about Switch from UFW and fail2ban to CSF">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Having played with CSF for a while on one server, I've decided I like it more than UFW and fail2ban. It seems much better at blocking mail brute-force attacks and distributed SSH attacks. So anyway, here's a list of steps to achieve the switch, as much for my record as anything. The server is running Ubuntu 16.04, but these general steps should work anywhere. The server I did it on is also running VestaCP, so there are a couple of extra steps for that.</p>



<span id="more-416"></span>



<ol class="wp-block-list"><li><strong>Download and install CSF</strong><br> cd code<br> wget https://download.configserver.com/csf.tgz<br> tar -xzf csf.tgz<br> cd csf<br> sh install.sh</li><li><strong>Edit open ports</strong> in /etc/csf/csf.conf to reflect your environment. The csf installer will automatically detect ssh running on non-standard ports and add those. It will also tell you during install which ports are listening. Review:<br> TCP_OUT = "20,21,22,25,53,80,110,113,443,587,993,995"<br> TCP_IN = "22,25,80,110,143,443,465"<br> Also TCPV6_OUT and TCPV6_IN.</li><li><strong>Set the following values</strong><br> TESTING = "1"<br> RESTRICT_SYSLOG = "3"<br> RESTRICT_SYSLOG_GROUP = "sysloggers"<br> LF_ALERT_TO = "x@domain.com"<br> LF_ALERT_FROM = "csf@domain.com"<br> LF_DISTATTACK = "1"<br> PT_USERTIME = "1"</li><li><strong>Review log settings</strong> from HTACCESS_LOG onwards. Specifically on Ubuntu, you need to set<br> SSHD_LOG = "/var/log/auth.log"<br> SU_LOG = "/var/log/auth.log"<br> FTPD_LOG = "/var/log/syslog"<br> SMTPAUTH_LOG = "/var/log/secure"<br> POP3D_LOG = "/var/log/mail.log"<br> IMAPD_LOG = "/var/log/mail.log"<br> IPTABLES_LOG = "/var/log/syslog"<br> SUHOSIN_LOG = "/var/log/syslog"<br> BIND_LOG = "/var/log/syslog"<br> SYSLOG_LOG = "/var/log/syslog"<br> WEBMIN_LOG = "/var/log/auth.log"</li><li>You can now <strong>start csf.</strong> It will replace all the UFW rules with its own.<br> ufw disable<br> systemctl disable ufw<br> systemctl disable fail2ban<br> csf -ra</li><li><strong>Archive off fail2ban and remove its logrotate config</strong><br> tar -cvf /etc/fail2ban.tar /etc/fail2ban/<br> apt remove fail2ban ufw<br> rm /etc/logrotate.d/fail2ban</li><li><strong>Extra steps for VestaCP</strong><br> In the /usr/local/vesta/conf/vesta.conf file, set<br> FIREWALL_SYSTEM=''<br> FIREWALL_EXTENSION=''<br> Install the vesta UI and v-csf script from https://github.com/haipham/csf-vestacp/blob/master/install.sh<br> (I prefer to do this manually)</li><li><strong>Final hacking.</strong> Over the next few days you'll need to pay attention to other files in /etc/csf/<br> csf.ignore<br> csf.pignore<br> csf.blocklists<br> csf.allow<br> csf.deny</li><li><strong>Extra aggressive settings for those email bruteforcers.</strong><br> LF_POP3D = "5"<br> LF_POP3D_PERM = "86400"<br> LF_IMAPD = "5"<br> LF_IMAPD_PERM = "86400"</li><li>Adjust Logwatch as necessary.</li></ol>
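<p class="wp-block-paragraph">One gotcha with step 3: while TESTING = "1", csf clears its rules on a timer, so once you've confirmed you can still log in, remember to flip it off and restart. A minimal sketch of that edit, run here against a scratch stand-in for /etc/csf/csf.conf so it's safe to dry-run anywhere (on the live box you'd point it at the real file and follow up with csf -ra):</p>

```shell
# Flip TESTING off in csf.conf and verify the change took.
# CONF is a throwaway copy; substitute /etc/csf/csf.conf on a real server.
CONF=$(mktemp)
printf 'TESTING = "1"\nRESTRICT_SYSLOG = "3"\n' > "$CONF"
sed -i 's/^TESTING = "1"/TESTING = "0"/' "$CONF"
RESULT=$(grep '^TESTING' "$CONF")
echo "$RESULT"
rm "$CONF"
```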



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Command to find all image files which are not really image files!</title>
		<link>https://play.datalude.com/blog/2017/05/command-to-find-all-image-files-which-are-not-really-image-files/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 11 May 2017 06:34:03 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[Wordpress]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=396</guid>

					<description><![CDATA[Quick one this &#8230; so you've got a compromised webserver and you want to check the files on it. Many scanning tools will ignore images, but an image might not always be what it seems! Check them all with this command: find /path/to/dir -regex ".*\.\(jpg\&#124;png\&#124;gif\)" -exec file {} \; &#124; grep -i -v "image data" ... <a title="Command to find all image files which are not really image files!" class="read-more" href="https://play.datalude.com/blog/2017/05/command-to-find-all-image-files-which-are-not-really-image-files/" aria-label="Read more about Command to find all image files which are not really image files!">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p class="wp-block-paragraph">Quick one this &#8230; so you've got a compromised webserver and you want to check the files on it. Many scanning tools will ignore images, but an image might not always be what it seems! Check them all with this command:</p>



<pre class="wp-block-preformatted">find /path/to/dir -regex ".*\.\(jpg\|png\|gif\)" -exec file {} \; | grep -i -v "image data"</pre>



<p class="wp-block-paragraph">If all is good, you won't get any output. If your server is seriously borked, then you might see things like this &#8230;</p>



<p class="wp-block-paragraph">./wp-content/uploads/2011/01/22.jpg: HTML document, ASCII text</p>



<p class="wp-block-paragraph">This is a red flag that the image is in fact a PHP file in disguise. Investigate!</p>



<p class="wp-block-paragraph">If you get this kind of thing,<br>./wp-content/uploads/2011/01/221.jpg: Minix filesystem, V2, 46909 zones</p>



<p class="wp-block-paragraph">it's probably a bug in an old version of <em>file</em>, so check your OS version, copy the file to a more recent OS, and try again.</p>
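<p class="wp-block-paragraph">If you want to see the technique fire before pointing it at a real server, here's a self-contained demo: drop one genuine image header and one disguised script into a scratch directory, and only the impostor should be printed (the PHP snippet is a harmless stand-in for a webshell):</p>

```shell
# Demo of the find/file/grep combo on throwaway data.
DIR=$(mktemp -d)
printf 'GIF89a' > "$DIR/real.gif"                  # genuine GIF magic bytes
printf '<?php echo "owned"; ?>' > "$DIR/fake.jpg"  # script hiding behind .jpg
FOUND=$(find "$DIR" -regex ".*\.\(jpg\|png\|gif\)" -exec file {} \; | grep -i -v "image data")
echo "$FOUND"
rm -rf "$DIR"
```

Only fake.jpg shows up in the output, because <em>file</em> inspects content rather than trusting the extension.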
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Bash script to clean Bots out of Apache Logs</title>
		<link>https://play.datalude.com/blog/2017/04/bash-script-to-clean-bots-out-of-apache-logs/</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Tue, 04 Apr 2017 08:09:34 +0000</pubDate>
				<category><![CDATA[General IT]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">https://play.datalude.com/blog/?p=389</guid>

					<description><![CDATA[If you've ever spent some time looking at webserver logs, you know how much crap there is in there from crawlers, bots, indexers, and all the bottom feeders of the internet. If you're looking for a specific problem with the webserver, this stuff can quickly become a nuisance, stopping you from finding the information you ... <a title="Bash script to clean Bots out of Apache Logs" class="read-more" href="https://play.datalude.com/blog/2017/04/bash-script-to-clean-bots-out-of-apache-logs/" aria-label="Read more about Bash script to clean Bots out of Apache Logs">Read more</a>]]></description>
										<content:encoded><![CDATA[<p>If you've ever spent some time looking at webserver logs, you know how much crap there is in there from crawlers, bots, indexers, and all the bottom feeders of the internet. If you're looking for a specific problem with the webserver, this stuff can quickly become a nuisance, stopping you from finding the information you need. In addition, it's often a surprise exactly <strong><em>how much</em></strong> of the traffic your website serves up to these bots.</p>
<p>The script below helps with both these problems. It takes stats of a logfile (Apache format, but it should also work on nginx), works on a copy, counts the number of lines it removes for each kind of bot, and then prints the new stats at the end. Copy the following into a file, e.g. log-squish.sh, and run it with the name of the logfile as an argument, e.g. ./log-squish.sh logfile.log<br />
You'll definitely want to edit the LOCALTRAFFIC bit to fit your needs. You may also want to add bots to the BOTLIST. Run the script once on a sample logfile and then view it to see what bots are left &#8230;</p>
<pre>#!/bin/bash

# Pass input file as a commandline argument, or set it here
INFILE=$1
OUTFILE=./$1.squish
TMPFILE=./squish.tmp

if [ -f $TMPFILE ] ; then 
    rm $TMPFILE
fi

# Check before we go ... 
read -p "Will copy $INFILE to $OUTFILE and perform all operations on the file copy. Press ENTER to proceed ..."

cp $INFILE $OUTFILE

# List of installation-specific patterns to delete from logfiles (this example for WP. Also excluding local IPaddress)
# Edit to suit your environment.
LOCALTRAFFIC=" wp-cron.php 10.10.0.2 wp-login.php \/wp-admin\/ "
echo
echo "-------- Removing local traffic ---------"
for TERM in $LOCALTRAFFIC; do
    TERMCOUNT=$( grep "$TERM" $OUTFILE | wc -l )
    echo $TERMCOUNT instances of $TERM removed &gt;&gt; $TMPFILE
    sed -i  "/$TERM/d" $OUTFILE
done
sort -nr $TMPFILE
rm $TMPFILE

# List of patterns to delete from logfiles, space separated
BOTLIST="ahrefs Baiduspider bingbot Cliqzbot cs.daum.net DomainCrawler DuckDuckGo Exabot Googlebot linkdexbot magpie-crawler MJ12bot msnbot OpenLinkProfiler.org opensiteexplorer pingdom rogerbot SemrushBot SeznamBot sogou.com\/docs tt-rss Wotbox YandexBot YandexImages ysearch\/slurp BLEXBot Flamingo_SearchEngine okhttp scalaj-http UptimeRobot YisouSpider proximic.com\/info\/spider "
echo
echo "------- Removing Bots ---------"
for TERM in $BOTLIST; do
    TERMCOUNT=$( grep "$TERM" $OUTFILE | wc -l )
    echo $TERMCOUNT instances of $TERM removed &gt;&gt; $TMPFILE
    sed -i  "/$TERM/d" $OUTFILE
done
sort -nr $TMPFILE
rm $TMPFILE

echo
echo "======Summary======="

#filestats before
PRELINES=$(cat $INFILE | wc -l )
PRESIZE=$( stat -c %s $INFILE )

#filestats after
POSTLINES=$(cat $OUTFILE | wc -l )
POSTSIZE=$( stat -c %s $OUTFILE )
PERCENT=$(awk "BEGIN { pc=100*${POSTLINES}/${PRELINES}; i=int(pc); print (pc-i&lt;0.5)?i:i+1 }")

echo Original file $INFILE is $PRESIZE bytes and contains $PRELINES lines
echo Processed file $OUTFILE is $POSTSIZE bytes and contains $POSTLINES lines
echo Log reduced to $PERCENT percent of its original size.
echo Original file was untouched.</pre>
<p>And here is a sample output.</p>
<pre>~/temp $ ./log-squish.sh access.log.2017-09-03
Will copy access.log.2017-09-03 to ./access.log.2017-09-03.squish and perform all operations on the file copy. Press ENTER to proceed

-------- Removing local traffic ---------
5536 instances of wp-login.php removed
507 instances of \/wp-admin\/ removed
84 instances of wp-cron.php removed
0 instances of 10.10.0.2 removed

------- Removing Bots ---------
2769 instances of bingbot removed
2342 instances of Googlebot removed
2177 instances of sogou.com\/docs removed
1815 instances of MJ12bot removed
1651 instances of ahrefs removed
1016 instances of opensiteexplorer removed
578 instances of Baiduspider removed
447 instances of Flamingo_SearchEngine removed
357 instances of okhttp removed
295 instances of UptimeRobot removed
122 instances of scalaj-http removed
74 instances of YandexBot removed
60 instances of ysearch\/slurp removed
24 instances of YisouSpider removed
22 instances of magpie-crawler removed
9 instances of linkdexbot removed
7 instances of YandexImages removed
7 instances of SeznamBot removed
5 instances of rogerbot removed
2 instances of tt-rss removed
1 instances of SemrushBot removed
0 instances of Wotbox removed
0 instances of proximic.com\/info\/spider removed
0 instances of pingdom removed
0 instances of OpenLinkProfiler.org removed
0 instances of msnbot removed
0 instances of Exabot removed
0 instances of DuckDuckGo removed
0 instances of DomainCrawler removed
0 instances of cs.daum.net removed
0 instances of Cliqzbot removed
0 instances of BLEXBot removed

======Summary=======
Original file access.log.2017-09-03 is 19395785 bytes and contains 74872 lines
Processed file ./access.log.2017-09-03.squish is 15432796 bytes and contains 54965 lines
Log reduced to 73 percent of its original size.
Original file was untouched.
</pre>
<p>So you can see that around 20% of the traffic on here is crap. And now the log file is much easier to read.</p>
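<p>The heart of the script is its count-then-delete loop. If you want to convince yourself it behaves before letting it loose on real logs, the same pattern can be exercised on throwaway data (the log lines below are made up, and grep -c is used as shorthand for grep | wc -l):</p>

```shell
# Miniature of the squish loop: count matches, report, delete in place.
LOG=$(mktemp)
printf 'GET / bingbot\nGET /page Mozilla\nGET /feed ahrefs\n' > "$LOG"
for TERM in bingbot ahrefs; do
    TERMCOUNT=$(grep -c "$TERM" "$LOG")
    echo "$TERMCOUNT instances of $TERM removed"
    sed -i "/$TERM/d" "$LOG"
done
SURVIVORS=$(cat "$LOG")
echo "$SURVIVORS"   # only the Mozilla line is left
rm "$LOG"
```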
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
