
Monday, August 25, 2014

Font issues with BeagleBone Black and ILI9341 TFT display

In my continuing quest to build a really cool digital speedometer for my car, I have been experimenting with an Adafruit 2.2" color TFT display. This past weekend I loaded up Ubuntu 14.04 on my BeagleBone Black and wired the TFT display to it. Adafruit has a Python library that works on both the BeagleBone Black and the Raspberry Pi. After trying out the example code I decided I wanted a nicer font than the default one it uses. The first font I tried looked fine, but the second had the bottom third of the characters cut off. To figure out which fonts were affected I wrote a Python script that cycled through displaying a bunch of fonts on the screen. Here is a video of the results:


As you can see, some fonts are affected more than others; a few have over half the line cut off. I started digging into the code that displays the text and figured out that it determines the height and width of the text and then turns the text into an image to be displayed on the screen. This is done so text can easily be rotated on the display.

In the library's drawing code, the height and width of the text are determined first, and then the image is created from that measurement.
The Adafruit library uses PIL (the Python Imaging Library) to create an image from the text. Ubuntu 14.04 actually ships a fork of PIL called Pillow. Some Google searching revealed that the textsize function has a bug: it does not account for the font's offsets, which causes the clipping on some fonts. The Ubuntu 14.04 I installed on my BBB came with Pillow 2.3.0, which was broken. I updated to the latest available package, Pillow 2.5.3, and it was still broken. The bug fix on the master branch of Pillow was just a small change to one file, PIL/ImageFont.py, so I decided to apply that change to my 2.5.3 install.
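Stripped down, the pattern looks something like this. This is just a sketch of the idea, not the Adafruit code itself, and the font path is only an example:

from PIL import Image, ImageDraw, ImageFont

font = ImageFont.truetype('/usr/share/fonts/truetype/freefont/FreeSans.ttf', 36)
text = '88 MPH'

# Ask the font how big the rendered text will be. With the buggy
# getsize/textsize this measurement comes back short for fonts that
# have a vertical offset.
width, height = font.getsize(text)

# Draw the text into its own image sized from that measurement...
txt_img = Image.new('RGBA', (width, height), (0, 0, 0, 0))
draw = ImageDraw.Draw(txt_img)
draw.text((0, 0), text, font=font, fill=(255, 255, 255, 255))

# ...so it can be rotated freely before being pasted onto the display buffer.
rotated = txt_img.rotate(90, expand=True)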

Here is how I fixed it.

cd /usr/local/lib/python2.7/dist-packages/PIL
sudo vi ImageFont.py

At about line 142, look for the getsize function. In the broken version it returns only the size that FreeType reports for the glyphs and ignores the font's offset, so the image created from that size is too small. The fix from the master branch adds the x and y offsets into the returned width and height.
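Paraphrasing the change (this is the gist of it rather than the exact upstream diff):

# Before: only the glyph size reported by FreeType is returned
def getsize(self, text):
    return self.font.getsize(text)[0]

# After: the font's x/y offset is added into the reported size
def getsize(self, text):
    size, offset = self.font.getsize(text)
    return (size[0] + offset[0], size[1] + offset[1])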


Save the file and then you need to compile it into python byte-code.
sudo pycompile ImageFont.py

This creates an ImageFont.pyc file. Now to test it again.



All fixed! I'm sure the fix to getsize will be pushed out soon, so this won't be a problem in the future, but until then this lets me continue my experimentation.



Wednesday, June 11, 2014

Ubuntu 14.04 init scripts fail and throw errors



I recently built out my first couple of Ubuntu 14.04 servers at work, and when my Chef scripts tried to run they blew up all over the place. Chef was getting errors when trying to start or restart services like ssh and rsyslog. Looking a little deeper at the errors, Chef was executing init scripts directly and getting back an exit status of 1. For example, when Chef tried to restart ssh it was running '/etc/init.d/ssh restart'.

That script on Ubuntu 14.04 has no output and exits with a status of 1. I ran the same thing manually from the command line on one of the servers and had the same result: no output and an exit status of 1. Some searching showed other people running into the same issue with various other services. I did find that the command 'service ssh restart' would do the right thing and not throw an error. Since it seemed like an Ubuntu or Debian bug with the init scripts, I decided to just modify my Chef scripts to use the service command instead.

By default Chef attempts to run the scripts in /etc/init.d when starting and stopping services. The service resource in Chef has attributes that let you modify how services are started: start_command, stop_command, restart_command and reload_command let you define an alternate command for each of these actions. Here are the changes I made to get my Chef scripts working again on Ubuntu 14.04.

Before
service "rsyslog" do
    supports :restart => true
    action [:enable,:start]
end

After
service "rsyslog" do
    restart_command "service rsyslog restart"
    start_command "service rsyslog start"
    supports :restart => true
    action [:enable,:start]
end

This change is backwards compatible with older versions of Ubuntu so I don't have to worry about special casing this just for 14.04 boxes.


[Update 1]
After reading a bit more I'm starting to suspect Ubuntu and/or Debian has purposely deprecated running the scripts in /etc/init.d to force people to use Upstart. Apparently these init scripts have been broken since Ubuntu 13.10.


[Update 2]
@retr0h gave me a cleaner way of accomplishing this:

service "rsyslog" do
  provider Chef::Provider::Service::Upstart
  supports :restart => true
  action [:enable,:start]
end

This does the same thing without having to define each command individually.


[Update 3]
@jtimberman informed me that this problem will be fixed in Chef 11.14. In that version Chef will automatically use Upstart for Ubuntu 13.10 and higher. (Chef support ticket) (Git commit)


Saturday, January 11, 2014

Daemonized rvm ruby tasks using start-stop-daemon

I am very new to Ruby so the solution described in this post might be very obvious to some, but I could not find all the parts of it in one place. I cobbled together bits and pieces of other people's startup and Capistrano scripts to get this working. My research also showed there are Ruby gems that might handle some of this better, but I wasn't trying to reinvent the wheel. Anyhow, on with the show. At work, during our Capistrano deployment we have a Ruby process that has to be launched in the background. We are using start-stop-daemon to daemonize the process, but our use of rvm complicates running rake because the rake binary is stored in the .rvm/gems directory. Normally the path to the rake binary is set from .bashrc and .bash_profile when you log in through a shell, but when you execute things from cron, start-up scripts or other non-shell environments those paths don't exist. After some googling and tinkering I finally got something that worked:

task :start_mytask, :roles => :mytask, :except => { :no_release => true } do
  run "RAILS_ENV=#{rails_env} start-stop-daemon --start -b -m -o -d ~/current -p ~/pids/mytask.pid -a /home/ubuntu/.rvm/gems/ruby-1.9.3-p194@global/bin/rake mytask"
end

The downside with this script is that I was calling rake for a specific version of Ruby. At the time this was good enough to get us by. Fast forward several months and now we are preparing to upgrade to Ruby 2.0. Our existing start script needed to be updated to use Ruby 2.0, but I wanted to find a better way that wouldn't need to be tweaked each time we upgrade Ruby. For my first shot at updating the script I switched to executing rvm and calling 'bundle exec rake':

task :start_mytask, :roles => :mytask, :except => { :no_release => true } do
   run "RAILS_ENV=#{rails_env} start-stop-daemon --start -b -m -o --chdir ~/current --pidfile ~/pids/mytask.pid --exec ~/.rvm/bin/rvm -- current do bundle exec rake mytask"
end

This eliminates directly calling rake for a specific version of Ruby but introduces a different issue. Executed this way, rvm launches a bash shell which then executes the rake task. start-stop-daemon creates a pid file based off of the first process started, which is the bash process, not the rake task. So when you try to stop it, start-stop-daemon tries to kill the bash process, which won't end because it has a child process running the rake task. I was getting closer but it wasn't perfect. More googling ensued and I discovered I could use rvm-exec instead of rvm. The rvm-exec command fixes the bash shell problem; it was created specifically for calling rvm in scripts. Here is what I finally came up with:

task :start_mytask, :roles => :mytask, :except => { :no_release => true } do
  run "RAILS_ENV=#{rails_env} start-stop-daemon --start -b -m -o --chdir ~/current --pidfile ~/pids/mytask.pid --exec ~/.rvm/bin/rvm-exec -- current bundle exec rake mytask"
end

It will now run independently of the Ruby version number, and since only one process is launched the pid file is created correctly. One other benefit I realized is that it is now much easier to monitor this process. Previously the process was listed as just "rake". If you have more than one running, that gets tricky to monitor. Launched with this new script, the process is listed as "rake mytask".

Finally to stop the daemonized rake task run this command:

task :stop_mytask, :roles => :mytask, :except => { :no_release => true } do
   run "start-stop-daemon -o -p ~/pids/mytask.pid --stop"
end


[UPDATE 01/13/2014] I realized today after doing more testing that I should be calling 'current bundle exec rake' instead of 'default bundle exec rake'. When you are upgrading ruby versions it doesn't necessarily mean you want to change the default ruby version in rvm. Using 'current' will cause the script to use whatever ruby is specified in the .rvmrc. I have edited the above scripts to reflect my new findings.

Tuesday, December 31, 2013

Simple way to integrate Nagios with Slack messaging

At work we recently switched messaging applications from Skype to a new platform called Slack. Slack just launched in August 2013. I have read it is similar to Campfire, but I've never used that platform so I can't really comment on that; it is certainly much more useful than a basic chat client like Skype. With Slack you can share files, easily search message history for text or files, and integrate with 3rd party applications. Plus it is private for just your team or company. Slack has quite a few preconfigured integrations plus the ability to create your own custom integrations. First we set up the Github integration, which allows all of our commit messages to dump into a channel. Next we set up the Trello integration to dump card changes from our main board into another channel. Then I went to set up the Nagios integration and ran into problems. They have a prebuilt integration for Nagios but I could not get it to work. It would post alert messages into the channel but the messages contained no information:


I mucked with their provided Perl script quite a bit but I simply could not get it to work. It just kept posting empty messages. Being impatient and a do-it-yourselfer, I set about finding another way to accomplish this. I looked through the list of integrations and noticed a custom one called Incoming WebHooks, which is an easy way to get messages from external sources posted into Slack. The simplest way to use Incoming WebHooks is to have curl post the message to Slack's API. I wrote a little bash script that provides a detailed Nagios alert, a link back to the Nagios web page and conditional emojis! Each warning level (OK, WARNING, CRITICAL and UNKNOWN) has its own emoji icon. Here are some example messages in my Slack client:


Here is my bash script that posts to Slack. I placed it in /usr/local/bin

Here are the Nagios config lines that are added to commands.cfg

And finally lines I added to contacts.cfg

I'm not sure why Slack's prebuilt Nagios integration didn't work for me but I really like what I came up with. No Perl modules to install and the only outside dependency is curl. It's also pretty easy to modify the info in the alert message by adding or removing NAGIOS_ env variables in the curl statement.
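If you would rather not shell out to curl, the payload itself is just a small JSON document posted to the webhook URL. Here is the same idea sketched in Python 2; the webhook URL, channel and message text are placeholders, not values from my actual setup:

import json
import urllib
import urllib2

# Placeholder URL from the Incoming WebHooks integration settings page
webhook_url = 'https://mycompany.slack.com/services/hooks/incoming-webhook?token=XXXXXXXX'

emoji = {'OK': ':white_check_mark:', 'WARNING': ':warning:',
         'CRITICAL': ':bangbang:', 'UNKNOWN': ':question:'}
state = 'CRITICAL'

payload = {
    'channel': '#alerts',
    'username': 'nagios',
    'icon_emoji': emoji.get(state, ':question:'),
    'text': '%s: check_disk on web01 - http://nagios.example.com/' % state,
}

# Incoming WebHooks accept a form-encoded 'payload' parameter
data = urllib.urlencode({'payload': json.dumps(payload)})
urllib2.urlopen(webhook_url, data)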

Monday, November 4, 2013

Upgrading existing Solr installation to new version of Jetty

At work we have been running into a problem with Apache Solr crashing. Depending on how much it was used we would get several weeks of usage out of it before it crashed. Now it is only running for five days at a time. So this fire has started burning hot enough to be at the top of my to-do list.
When it crashes it throws errors saying "Too many open files". Running lsof showed it wasn't actually open files but thousands of orphaned sockets left open. The sockets looked like this in the lsof output:

java 2428 root 2173u sock 0,7 0t0 123291433 can't identify protocol

There won't be anything listed in netstat. These sockets don't have open connections to anything. The Solr log file will start showing errors similar to this:

SEVERE: java.io.FileNotFoundException: /usr/local/apache-solr-3.5.0/example/solr/data/index/_dgf.frq (Too many open files)

SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!!

SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@./solr/data/index/write.lock

Initially we dealt with this problem by monitoring the number of open files for the java process and running a reindex when it got close to the limit. Not a great solution but at the time there weren't enough hours in the day to put a bunch of effort into figuring this out. In my case the limit when Solr blew up was 4000 open sockets. Once Solr had that many sockets open it would just throw 500 errors.

Usually the answer to a situation like this is to upgrade Solr to a newer version. Unfortunately I couldn't do that in this case because we have a Ruby gem that is dependent on Solr version 3.5. My research pointed to Jetty, not Solr, as the source of the problem. Once I found this post I knew for sure Jetty was causing the orphaned sockets. Solr 3.5.0 is packaged with Jetty 6.1.26, which has a bug that causes the orphaned sockets under certain conditions. Because Jetty 6 is fairly old, the developers are not going to fix it. At this point I set about upgrading Jetty to version 7.

The first thing I had to figure out was what stuff was Solr and what stuff was Jetty. Turns out most of the package is Jetty. Solr is contained in apache-solr-3.5.0/example/solr and apache-solr-3.5.0/example/webapps/solr.war. So I decided to try and stuff Solr 3.5.0 into Jetty 7.6.13. Later I may try moving to the latest version of Jetty 9 but I'm just trying to solve this orphaned socket problem right now and was worried the older version of Solr might have problems with a newer Jetty.

Upgrading Jetty

Here are the steps I took to upgrade Solr 3.5.0 to Jetty 7

Download latest Jetty 7 (jetty-distribution-7.6.13.v20130916.tar.gz at the time this was written) from here http://download.eclipse.org/jetty/7.6.13.v20130916/dist/

Untar jetty-distribution-7.6.13.v20130916.tar.gz
tar xfvz jetty-distribution-7.6.13.v20130916.tar.gz

Create destination directory for all the new files
mkdir /usr/local/apache-solr-3.5.0-jetty-7.6.13
mkdir /usr/local/apache-solr-3.5.0-jetty-7.6.13/example

copy the contents of jetty-distribution-7.6.13.v20130916 to new directory
cp -a jetty-distribution-7.6.13.v20130916/* /usr/local/apache-solr-3.5.0-jetty-7.6.13/example

Copy solr files from old solr installation to new Jetty directory
cp -a /usr/local/apache-solr-3.5.0/example/solr  /usr/local/apache-solr-3.5.0-jetty-7.6.13/example
cp -a /usr/local/apache-solr-3.5.0/example/webapps/solr.war /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/webapps/

Edit the jetty.xml config file to change the listening port
vi /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/etc/jetty.xml
Change this line
 <Set name="port"><Property name="jetty.port" default="8080"/></Set>
To this
 <Set name="port"><Property name="jetty.port" default="8983"/></Set>


At this point solr will run but there are some example war files and config files that aren't needed for Solr and should be cleaned up. 

- Edit /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/start.ini
   vi /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/start.ini
   Comment out the line
   etc/jetty-testrealm.xml
   so it reads 
   #etc/jetty-testrealm.xml

- Clean up example war files
  cd /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/webapps
  mkdir BAK
  mv test.war spdy.war BAK

- Clean up example config files
  cd /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/etc
  mkdir BAK
  mv jetty-spdy.xml jetty-spdy-proxy.xml jetty-testrealm.xml BAK
  cd /usr/local/apache-solr-3.5.0-jetty-7.6.13/example/contexts
  mkdir BAK
  mv test.xml BAK

I use a symbolic link for the installation directory so the start script doesn't have to be modified. Before restarting I have to switch that sym link.
  service solr stop
  cd /usr/local
  rm solr
  ln -s apache-solr-3.5.0-jetty-7.6.13 solr
  service solr start

Then you can test hitting the service locally.
  curl localhost:8983/solr/

It should return HTML that says something like this:
  <title>Welcome to Solr</title>
  </head>

  <body>
  <h1>Welcome to Solr!</h1>

You will probably need to run a reindex if transactions have been taking place while solr was down for the upgrade.

Resources used to compile this post
http://comments.gmane.org/gmane.comp.ide.eclipse.jetty.user/919
https://github.com/umars/jetty-solr
http://stackoverflow.com/questions/6425759/how-to-upgrade-update-the-solr-jetty-ubuntu-package
https://jira.codehaus.org/browse/JETTY-1458
http://grokbase.com/t/lucene/solr-user/123e6et8e0/too-many-open-files-lots-of-sockets

Saturday, May 25, 2013

Monitor S3 file ages with Nagios


I have started using Amazon S3 storage for a couple different things like static image hosting and storing backups. My backup scripts tar and gzip files and then upload the tarball to S3. Since I don't have a central backup system to alert me of failed backups or to delete old backups, I needed to handle those tasks manually. S3 has built-in lifecycle settings, which I do utilize, but as with everything AWS it doesn't always work perfectly. As for alerting on failed backups, I decided to handle that by watching the age of the files stored in the S3 bucket. I ended up writing a Nagios plugin that can monitor both the minimum and maximum age of files stored in S3. In addition to monitoring the age of backup files, I think this could also be useful for monitoring file ages if you use an S3 bucket as a temporary storage area for batch processing. In this case old files would indicate a missed file or possibly a damaged file that couldn't be processed.

I wrote this in my favorite new language, Python, and used the boto library to access S3. The check looks through every file stored in a bucket and checks the file's last_modified property against the supplied min and/or max. The check can be used for min age, max age or both. You will need to create a .boto file in the home directory of the user executing the Nagios check, with credentials that have at least read access to the S3 bucket.

The check_s3_file_age.py file is available on my github nagios-checks repository here: https://github.com/matt448/nagios-checks.
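The heart of the check is just a loop over the bucket with boto, comparing each key's last_modified timestamp against the thresholds. Here is a trimmed-down sketch of the max-age half of that idea; it is not the plugin itself, and the bucket name and limit are examples:

import sys
from datetime import datetime

import boto

bucket_name = 'myimportantdata'
max_age_hours = 720

# Credentials are read from the .boto file in the user's home directory
conn = boto.connect_s3()
bucket = conn.get_bucket(bucket_name)

now = datetime.utcnow()
ages = []
for key in bucket.list():
    modified = datetime.strptime(key.last_modified, '%Y-%m-%dT%H:%M:%S.%fZ')
    ages.append((now - modified).total_seconds() / 3600.0)

if not ages:
    print 'UNKNOWN: no files found in %s' % bucket_name
    sys.exit(3)

oldest = max(ages)
if oldest > max_age_hours:
    print 'CRITICAL: oldest file is %.1f hours old (limit %d)' % (oldest, max_age_hours)
    sys.exit(2)

print 'OK: oldest file is %.1f hours old' % oldest
sys.exit(0)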

To use this with NRPE add an entry something like this:

command[check_s3_file_age]=/usr/lib/nagios/plugins/check_s3_file_age.py --bucketname myimportantdata --minfileage 24 --maxfileage 720

Here is output from --help:

./check_s3_file_age.py --help

usage: check_s3_file_age.py [-h] --bucketname BUCKETNAME
                            [--minfileage MINFILEAGE]
                            [--maxfileage MAXFILEAGE] [--listfiles] [--debug]

This script is a Nagios check that monitors the age of files that have been
backed up to an S3 bucket.

optional arguments:
  -h, --help            show this help message and exit
  --bucketname BUCKETNAME
                        Name of S3 bucket
  --minfileage MINFILEAGE
                        Minimum age for files in an S3 bucket in hours.
                        Default is 0 hours (disabled).
  --maxfileage MAXFILEAGE
                        Maximum age for files in an S3 bucket in hours.
                        Default is 0 hours (disabled).
  --listfiles           Enables listing of all files in bucket to stdout. Use
                        with caution!
  --debug               Enables debug output.


I am a better sys admin than I am a programmer so please let me know if you find bugs or see ways to improve the code. The best way to do this is to submit an issue on github.

Here is sample output in Nagios

Saturday, March 30, 2013

Compiling libhid for Raspbian Linux on a Raspberry Pi


My son and I are working on a project using a Raspberry Pi, and I needed to be able to talk to a USB HID device. This requires a software library called libhid, but unfortunately it is not available as a package on Raspbian Linux. I downloaded the source and attempted to compile it but ran into an error:

lshid.c:32:87: error: parameter ‘len’ set but not used [-Werror=unused-but-set-parameter]
cc1: all warnings being treated as errors
make[2]: *** [lshid.o] Error 1
make[2]: Leaving directory `/root/libhid-0.2.16/test'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/libhid-0.2.16'
make: *** [all] Error 2

After some googling I found a couple others on the Raspberry Pi forums that ran into the same problem. One of the commenters came up with a simple fix that requires a quick edit of the source code. In ~/libhid-0.2.16/test you need to edit the file lshid.c

Here is the code before making the edit:

39 /* only here to prevent the unused warning */
40 /* TODO remove */
41 len = *((unsigned long*)custom);
42
43 /* Obtain the device's full path */


Here is the code after the edit.
You need to comment out line 41 and then add len = len; and custom = custom;

39 /* only here to prevent the unused warning */
40 /* TODO remove */
41 //len = *((unsigned long*)custom);
42 len = len;
43 custom = custom;
44
45 /* Obtain the device's full path */

After editing the file simply run configure, make and make install like normal. The library will be put into /usr/local. Make sure you run sudo ldconfig before trying to compile any software that uses libhid. Thanks Raspberry Pi forums!

Monday, March 18, 2013

Template Nagios check for a JSON web service

I wrote two different custom Nagios checks for work last week and realized I could make a useful template out of them. After writing the first check I was able to reuse most of the code for the second check; the only changes I had to make had to do with the data returned. So I decided to turn this into a generic template that I can reuse in the future. The check first verifies that the web service is responding correctly and then checks various data returned in JSON format. While writing this template I found a really cool service (www.jsontest.com) that let me code against an endpoint available to anyone who wants to try out this Nagios check before customizing it. This is the first time I have used Python's argparse module and I have to say it is fantastic. It makes adding command line arguments very easy and the result looks professional.

My github repo can be found here: https://github.com/matt448/nagios-checks

Here is the code in a gist:
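That gist has the full template; boiled down, the pattern is to parse arguments with argparse, fetch and decode the JSON, and exit with the right Nagios status code. A bare-bones sketch of that pattern (the endpoint and field name come from jsontest.com, and everything else here is illustrative rather than copied from the template):

#!/usr/bin/env python
import argparse
import json
import sys
import urllib2

parser = argparse.ArgumentParser(description='Nagios check for a JSON web service')
parser.add_argument('--url', default='http://date.jsontest.com', help='URL of the JSON service to check')
parser.add_argument('--timeout', type=int, default=10, help='HTTP timeout in seconds')
args = parser.parse_args()

try:
    response = urllib2.urlopen(args.url, timeout=args.timeout)
    data = json.load(response)
except Exception as e:
    # If we can't fetch or parse the response, the service is down as far as Nagios cares
    print 'CRITICAL: could not fetch %s (%s)' % (args.url, e)
    sys.exit(2)

# Then check whatever fields matter; date.jsontest.com returns a
# milliseconds_since_epoch key we can use as a sanity check.
if 'milliseconds_since_epoch' not in data:
    print 'WARNING: response is missing the expected field'
    sys.exit(1)

print 'OK: %s returned milliseconds_since_epoch=%s' % (args.url, data['milliseconds_since_epoch'])
sys.exit(0)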

Wednesday, March 13, 2013

Nagios file paths on Ubuntu and simple backup script


This is more of a note to myself than anything else but might be helpful to others. Here are the config and data directories for Nagios when installed using packages on Ubuntu 12.04.


Config files
----------------------
/etc/nagios3/
/etc/nagios3/conf.d
/etc/nagios-plugins/config
/etc/nagios

Plugin executables
---------------------
/usr/lib/nagios/plugins

Graphing (pnp4nagios)
----------------------
/usr/share/pnp4nagios/html
/var/lib/pnp4nagios/perfdata

Other
-----------------------
/var/lib/nagios
/var/lib/nagios3



Here is a very simple backup script for Nagios on Ubuntu 12.04
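A backup here really just needs to tar up the directories above with a date stamp. As a rough sketch of that idea in Python (the destination path is only an example):

import os
import tarfile
import time

# Directories listed above (conf.d is picked up under /etc/nagios3)
nagios_dirs = [
    '/etc/nagios3',
    '/etc/nagios-plugins/config',
    '/etc/nagios',
    '/usr/lib/nagios/plugins',
    '/usr/share/pnp4nagios/html',
    '/var/lib/pnp4nagios/perfdata',
    '/var/lib/nagios',
    '/var/lib/nagios3',
]

dest = '/backup/nagios-backup-%s.tar.gz' % time.strftime('%Y%m%d')
with tarfile.open(dest, 'w:gz') as tar:
    for d in nagios_dirs:
        if os.path.isdir(d):
            tar.add(d)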

Monday, January 28, 2013

Relaying Postfix through AuthSMTP on an alternate port


AuthSMTP is an authenticated SMTP relay service that you can use with web applications or in any situation where you need to send outbound e-mail. Because it is an authenticated service, it is a little trickier to configure Postfix to relay through it. I found this post really helpful in configuring the sasl options, but one thing I couldn't find a clear answer on was how to use a port other than 25 for the relay host. AuthSMTP offers alternative ports (23, 26, 2525) for SMTP because some ISPs block port 25. To use an alternative port, just put a colon after the host name and add the port number, like this (in main.cf):

relayhost = mail.authsmtp.com:2525


The entry in your sasl-passwords file must match the relayhost name like this:

mail.authsmtp.com:2525 username:secretpassword


Just a quick tip about something that wasn't obvious to me and hopefully this helps out someone else.



Saturday, March 17, 2012

Detailed logging for chrooted sftp users


At work we have been migrating some of our customers from ftp to sftp. This gives us and the customer better security but one drawback with my initial sftp setup was that we didn't have detailed logs like most ftp servers produce. All we were getting in the logs were records of logins and disconnects. We didn't have any information on what a client was doing once they were connected. Things like file uploads, file downloads, etc. I had some time this morning to take a look at this. I started with doing some google searches for 'sftp logging'.

I found a lot of blog posts saying that all you had to do was change this line in sshd_config:
ForceCommand internal-sftp
to:
ForceCommand internal-sftp -l VERBOSE
I tried this but didn't get any additional logging. What I finally figured out is that the logging setup for chrooted sftp is a bit more involved. I ran across this blog which spells out what needs to be done quite clearly. The meat of the problem is that the chrooted sftp process can't open /dev/log because it is not within the chrooted filesystem. An additional layer of complexity is that my sftp home directories exist on an NFS mount. Here are the steps from bigmite.com's blog that I used for my CentOS system.

1. Modify /etc/ssh/sshd_config
Edit /etc/ssh/sshd_config and add -l VERBOSE -f LOCAL6 to the internal-sftp line.
Match group sftpuser
 ChrootDirectory /sftp/%u
 X11Forwarding no
 AllowTcpForwarding no
 ForceCommand internal-sftp -l VERBOSE -f LOCAL6

2. Modify the syslog configuration
If the user's sftp directory is not on the root filesystem, syslog will need to use an additional logging socket within the user's filesystem. For example, /sftp is the separate sftp filesystem (like my setup with the sftp home directories on an NFS mount). For syslog on Redhat/CentOS, edit /etc/sysconfig/syslog so that the line:
SYSLOGD_OPTIONS="-m 0"
reads:
SYSLOGD_OPTIONS="-m 0 -a /sftp/sftp.log.socket"
To log the sftp information to a separate file the syslog daemon needs to be told to log messages for LOCAL6 to /var/log/sftp.log. Add the following to /etc/syslog.conf:
#For SFTP logging
local6.* /var/log/sftp.log
Restart syslog with the command service syslog restart. When syslog starts up it will create the sftp.log.socket file.


3. Create links to the log socket
Now you will need to create a link in each user's chrooted home directory so the chrooted sftp process can write to the log. This will also need to be done every time you create a new user.
mkdir /sftp/testuser1/dev
chmod 755 /sftp/testuser1/dev
ln /sftp/sftp.log.socket /sftp/testuser1/dev/log


And that's it! Now sftp will log everything an sftp user does while connected to your server. Here is a sample of what the logs look like:

Mar 16 15:36:45 sftpsrvname internal-sftp[2449]: session opened for local user sftpusername from [192.168.1.10]
Mar 16 15:36:45 sftpsrvname internal-sftp[2449]: received client version 3
Mar 16 15:36:45 sftpsrvname internal-sftp[2449]: realpath "."
Mar 16 15:37:13 sftpsrvname internal-sftp[2449]: lstat name "/"
Mar 16 15:37:13 sftpsrvname internal-sftp[2449]: lstat name "/"
Mar 16 15:37:13 sftpsrvname internal-sftp[2449]: opendir "/"
Mar 16 15:37:13 sftpsrvname internal-sftp[2449]: closedir "/"
Mar 16 15:37:21 sftpsrvname internal-sftp[2449]: realpath "/backup"
Mar 16 15:37:21 sftpsrvname internal-sftp[2449]: stat name "/backup"
Mar 16 15:37:33 sftpsrvname internal-sftp[2449]: lstat name "/backup"
Mar 16 15:37:33 sftpsrvname internal-sftp[2449]: lstat name "/backup/"
Mar 16 15:37:33 sftpsrvname internal-sftp[2449]: opendir "/backup/"
Mar 16 15:37:33 sftpsrvname internal-sftp[2449]: closedir "/backup/"
Mar 16 15:37:37 sftpsrvname internal-sftp[2449]: open "/backup/testfile" flags WRITE,CREATE,TRUNCATE mode 0664
Mar 16 15:37:37 sftpsrvname internal-sftp[2449]: close "/backup/testfile" bytes read 0 written 288
Mar 16 15:41:45 sftpsrvname internal-sftp[2449]: lstat name "/backup"
Mar 16 15:41:45 sftpsrvname internal-sftp[2449]: lstat name "/backup/"
Mar 16 15:41:45 sftpsrvname internal-sftp[2449]: opendir "/backup/"
Mar 16 15:41:45 sftpsrvname internal-sftp[2449]: closedir "/backup/"
Mar 16 15:42:16 sftpsrvname internal-sftp[2449]: lstat name "/backup/testfile"
Mar 16 15:42:16 sftpsrvname internal-sftp[2449]: remove name "/backup/testfile"
Mar 16 15:42:24 sftpsrvname internal-sftp[2449]: session closed for local user sftpusername from [192.168.1.10]