Blog


31
January 2014

Gavin Pickin

Dev Ops - From Mail Logs to DB Stats in a CFML Dashboard - Part 3

Server Admin

This is the third post in this mini series; you can read the first and second posts here. We know what we want out of the log files… so let's set up the process of getting the logs moved, pull the "goods" out of them, and put them in a position for our CFML Agent to do its job. We want to keep separation of duties on the servers, so we have Server A with mail running, and the log files… and we want to move them to Server B… massage them, and prep them for CFML. So let's set up our cron jobs on Server A.

Server A

First we want to set up a job to move the log files that have been archived. By default, the live mail log file is maillog, and as the logs are archived, a dash and a datestamp are appended to the name. Knowing this, we can use the following command to move them to our maillogagent home directory

cp -f /var/log/maillog-2* /home/maillogagent/

 

Next, we need to change the permissions. By default, the maillogs are very locked down, so let's change the owner, and then the permissions.

chown maillogagent.maillogagent /home/maillogagent/maillo*
chmod 775 /home/maillogagent/maillo*

 

Once we have processed one of these files, we want to make sure we don't process it again, so we'll move the log into a subfolder for future use.

mv /var/log/maillog-2* /var/log/maillogs/

 

So put it all together, and we have our cronDailyMaillog.sh file.

#!/bin/bash

cp -f /var/log/maillog-2* /home/maillogagent/
chown maillogagent.maillogagent /home/maillogagent/maillo*
chmod 775 /home/maillogagent/maillo*
mv /var/log/maillog-2* /var/log/maillogs/

 

Our cronHourlyMaillog.sh is a little simpler; it merely copies the current maillog to the same folder, so we can import the newest updates hourly and keep up with those trying to cause trouble.

#!/bin/bash

cp -f /var/log/maillog /home/maillogagent/
chown maillogagent.maillogagent /home/maillogagent/maillog

 

Make sure you give the scripts execute permissions with chmod +x cron*
Let's add them both to our crontab. To edit your crontab, use the following command

crontab -e

This opens your cron table in a vi editor. 
Press "i" to go into insert mode, and add the following lines

00 */1 * * * /pathtoyourshells/cronHourlyMaillog.sh
00 2 */1 * * /pathtoyourshells/cronDailyMaillog.sh

The first line states we want our Hourly job to run every hour on the hour.
The second line states we want our Daily job to run at 2am every day.
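
For reference, a crontab entry is just five time fields followed by the command to run. Here is a quick sketch of the layout (the script path is only a placeholder):

# minute (0-59)  hour (0-23)  day of month (1-31)  month (1-12)  day of week (0-6, Sunday = 0)
# e.g. run a script at 4:30am every Monday
30 4 * * 1 /pathtoyourshells/someWeeklyJob.sh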

To leave edit mode, hit esc. Then you can navigate, or run commands in vi. 
To quit without saving, type :q! which quits immediately, without any warning.
To quit normally, type :q. If you have unsaved changes, it will warn you that you cannot quit without saving, or without forcing the quit with :q!
To write the changes, type :w 
Once you have written your changes, you can use :q to quit safely.

Cron is a great tool, and obviously I have barely touched anything here. Maybe I will write more sometime, but there are plenty of great resources out there to learn how to configure cron.

Cron does not need to be restarted for the changes to take effect, as it is always looking for crontab changes.
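
If you want to confirm the jobs are actually firing, cron on a CentOS box normally logs each command it runs to /var/log/cron, so something along these lines should show our two scripts being kicked off (assuming the script names we used above):

grep cronHourlyMaillog /var/log/cron
grep cronDailyMaillog /var/log/cron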

 

Server B

Again, we're working with two separate processes: an hourly process to pull the latest and greatest logs, so we can crunch the newest data possible, and a daily process to pull in the weekly archives. We check daily (I could check weekly if I figure out what day the archive is created on), check that weekly archive for any leftover logs, and then save the log file for longer term keeping and analysis if we want it.

So let's build our cron shell scripts.

cronDailyMaillog.sh

First, we need to use scp to copy the file from Server A to Server B. With scp I always pull the file; that's why this task is on Server B. Since our maillogagent has the files in its home directory, permissions are pretty simple: we just set up SSH keys for each login (no password required for the scp) and use the following command

scp maillogagent@servera:maillog-2* /home/maillogagent/

 

  • scp is the command, secure copy over ssh.
  • maillogagent@ is the user we are connecting as; our ssh user is maillogagent in this case.
  • @servera is the server we're connecting to, just like normal ssh.
  • : the colon separates the ssh connection details from the file path. 
  • maillog-2* gets us anything that matches maillog-2* in the home directory of the user we're ssh-ing as; in this case, it's giving us all the maillog-2* files in /home/maillogagent/ on Server A.
  • /home/maillogagent/ is the destination on Server B we want the files copied to.

One command, a lot of explanation, but pretty easy. Using SSH keys for server access is very handy. I barely covered SSH keys in the Source Control series, for using SSH keys with Bitbucket, but not for SSH server access; I will try to post a how-to on that shortly, because if you are not using them, you probably should be.
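
In the meantime, here is the rough idea as a sketch: on Server B, generate a key pair for the user that runs these cron jobs, and copy the public key to the maillogagent account on Server A. After that, scp and ssh from Server B to Server A will not prompt for a password.

# on Server B, as the user that will run the cron jobs
ssh-keygen -t rsa                      # accept the defaults; leave the passphrase empty for unattended cron use
ssh-copy-id maillogagent@servera       # appends the public key to ~maillogagent/.ssh/authorized_keys on Server A
ssh maillogagent@servera 'hostname'    # quick test, should not ask for a password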

Next, we need to make sure our permissions on the file are right.
Since we'll be sharing it with our cfml engines, I'm going to change the group ownership to the group that has that access.

chown maillogagent.webserver /home/maillogagent/*
chmod -R 775 /home/maillogagent/*

 

Once we have set permissions, we're going to use an SSH command, to move the file on Server A into a different folder on Server A so we don't process the same file later. When we look for files, we never look recursively, so moving the file into a subfolder is perfect in this case.

ssh maillogagent@servera 'mv /home/maillogagent/maillog-2* /home/maillogagent/movedMaillogs'

 

We use the ssh command, connect as maillogagent@servera, and then run the command in quotes… just as if we were on that machine. It simply moves any maillog-2* file in the maillogagent home dir into the movedMaillogs folder, so we know we have already moved it from Server A to Server B, and it stops us from repeating this in the future.

Next, we want to grep the file and produce the cleaned-up log file that our CFML engine (ColdFusion or Railo) will process.

grep 'CHKUSER' maillog-2* | grep -v 'sender:' > clean_maillog.log

 

I assume you know grep, but quickly: we're grepping the maillog-2* files for 'CHKUSER', then we grep the result of the first grep and ask for anything that DOES NOT contain 'sender:', and then save the results into clean_maillog.log

Once we have grepped the file, we want to move the maillog-2* log file into another subdirectory, so we don't re-process this file over and over again either. 

mv /home/maillogagent/maillog-2* /home/maillogagent/processedLogs

 

So our final cronDailyMaillog.sh file should look like this.

#!/bin/bash

# work out of the maillogagent home directory so the relative grep paths below resolve correctly,
# no matter which crontab ends up running this script
cd /home/maillogagent || exit 1

scp maillogagent@servera:maillog-2* /home/maillogagent/
chown maillogagent.webserver /home/maillogagent/*
chmod -R 775 /home/maillogagent/*
ssh maillogagent@servera 'mv /home/maillogagent/maillog-2* /home/maillogagent/movedMaillogs'
grep 'CHKUSER' maillog-2* | grep -v 'sender:' > clean_maillog.log
mv /home/maillogagent/maillog-2* /home/maillogagent/processedLogs
chown maillogagent.webserver /home/maillogagent/*
chmod -R 775 /home/maillogagent/*

 

Save the file, chmod +x cronDailyMaillog.sh and that file is ready to go.

Next, the hourly cron. This one is pretty similar, except we don't move the file on Server A, and we don't move the file on Server B. The files keep being overwritten and just keep getting processed over and over. The only difference is that when we grep the file, we want to write to a different filename, just so we don't have conflicts with the daily cron process, so we'll add an H to the end of the file name. It looks something like this.

#!/bin/bash

# work out of the maillogagent home directory so the relative grep path below resolves correctly
cd /home/maillogagent || exit 1

scp maillogagent@servera:maillog /home/maillogagent/
chown maillogagent.webserver /home/maillogagent/*
chmod -R 775 /home/maillogagent/*
grep 'CHKUSER' maillog | grep -v 'sender:' > clean_maillogH.log
chown maillogagent.webserver /home/maillogagent/*
chmod -R 775 /home/maillogagent/*

Now, chmod +x cronHourlyMaillog.sh and then we can add them to our cron service.

crontab -e

 

Press i to enter insert mode, and we'll add the following 2 lines.

05 */1 * * * /pathtoyourshells/cronHourlyMaillog.sh
00 4 * * * /pathtoyourshells/cronDailyMaillog.sh

 

Press esc, then :w enter and then :q enter to save and exit.

The hourly cron job will run hourly, at 5 minutes after the hour. The Server A task runs on the hour, so 5 minutes should give Server A plenty of time to complete its job before Server B's hourly job starts at 5 past.

The daily cron job will run daily at 4 hours past midnight, or 4am every morning. The daily cron job runs at 2am on Server A, giving it more than enough time. I plan to see when the archiving actually takes place, and then edit the cron jobs to run only once a week, right after the archive is complete, but for the time being, this works.

Now we are all set. Server A copies the maillog hourly, and daily it looks for newly archived maillogs (older archives are moved to a subfolder so they're not accidentally caught by the wildcard file name) and moves them. Server B secure copies them over, moves the original file on Server A into a subfolder, greps the files, and then moves the archived logs on Server B into a subfolder too. Everything is ready for our CFML engine to schedule a task to look for those clean log files and do something with them.

I think that is enough for this post. Check back when we get into MySQL and ColdFusion / Railo processing; we're getting closer and closer to those pretty graphs in our Web Management Dashboard.

Thanks for following along, 

Gavin

30
January 2014

Gavin Pickin

Dev Ops - From Mail Logs to DB Stats in a CFML Dashboard - Part 2

Server Admin

In the first post in this series, I mentioned that lately I have had a lot more Dev Ops duties than in the past. I am sharing my process as I try to solve a problem: keeping up with our qmail mail logs to look for potentially risky behavior, because our servers get blacklisted. I'm using a few technologies, with the end goal of getting the log data into a usable format for ColdFusion / Railo CFML to crunch the data and make it look pretty in a CFML Dashboard. Read the first post here to get up to speed.

We left off looking at 2 lines in the logs

Jan 28 21:32:14 independence smtp-mx: 1390973534.709575 CHKUSER rejected relaying: 
from <sysadmin@salvex.com::> remote <salvex.com:quicksmtp.salvex.com:204.232.190.243> 
rcpt <username@thedomain.com> : client not allowed to relay

 

Someone trying to use us to relay to one of our internal email accounts… note the smtp-mx.
Compare that to the line below, which is smtp: someone trying to use our server to relay to an external address.

Jan 28 21:19:10 independence smtp: 1390972750.741663 CHKUSER rejected relaying: 
from <oltqs@oneofourdomains.com::> remote <gruporga-be3b0e:host51-70.brs.com.br:177.11.51.70> 
rcpt <teste3.pop3@hotmail.com> : client not allowed to relay

 

So, to make this process successful, we need to identify the other lines of the log files that we are interested in. I ran a grep over 5-6 weeks of logs, and I found almost 10 million lines. A lot of that is garbage that we won't need, so let's keep diving in and see what else we can find.

There seem to be a lot of lines like the following

Dec 29 17:28:26 independence smtp-mx: 1388366906.766843 CHKUSER accepted sender: 
from <wjd@kamescam.com::> remote <lys.kamescam.com:pc93.pointedcoach.com:68.168.30.93> 
rcpt <> : sender accepted
Dec 29 17:27:33 independence smsp: 1388366853.473529 CHKUSER accepted sender: 
from <support@aaaaaaaaa.com:bbbbbbbbb@cccccccc.com:> remote <eeeeeee.dddddd.com:76.1.2.3> 
rcpt <> : accepted any sender always

 

Looking at the patterns, it looks like the system uses CHKUSER accepted/rejected and rcpt or sender. The sender lines seem to just decide if the sender is legit or not, but the real action is where the rcpt is. So, I think we can ignore any lines with 'sender:' in them.

Let's grep that and see what we have left.
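
The command I ran looks something like this, assuming the logs are sitting in the current directory (the same two-stage grep we end up using in the cron scripts):

grep 'CHKUSER' mail* | grep -v 'sender:' | less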

Dec 29 17:26:47 independence smsp: 1388366807.747284 CHKUSER relaying rcpt: 
from <aaaaaa@bbbbbbb.com:ccccccc@ddddddd.net:> remote <76.1.2.3> 
rcpt <xxxxxxx@gmail.com> : client allowed to relay

 

Looking at this line, we see "client allowed to relay", so let's try to see why. The user is sending as aaaaaa@bbbbbbb.com, and is authenticating as ccccccc@ddddddd.net… those are both our domains, so this is a successful authentication. The user is sending as a user other than themselves though, and after further study, ccccccc@ddddddd.net is one of the authenticating emails used by one of our CFML servers. 

Let's look for some CHKUSER relaying rcpt: lines where the sender and the authenticating email are the same.
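
One rough way to pull those out, as a sketch (this assumes GNU grep, which supports backreferences in basic regular expressions), is to match lines where the address before the first colon and the address after it are identical:

# \1 refers back to whatever the first \( \) group matched, so this only matches
# "from <address:address:>" where both addresses are the same
grep 'CHKUSER relaying rcpt' mail* | grep 'from <\([^:>]*\):\1:>'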

Dec 29 17:02:20 independence smtp: 1388365340.829794 CHKUSER relaying rcpt: 
from <alert@aaaaaaaaaa.com:alert@aaaaaaaaaa.com:> remote <alert:50-202-118-46-static.hfc.comcastbusiness.net:50.202.118.46> 
rcpt <ddddddd@hotmail.com> : client allowed to relay

 

This line shows a relaying success, with the same sender as the authenticator. The service is smtp, not smsp, so maybe that is a factor in distinguishing them… but the rest of the log line looks the same.
Let's look and see if we can find CHKUSER relaying lines without 'sender:' and with no authenticating email address ('::> remote'), and see what we get when we grep for that.

Dec 30 18:26:49 independence smtp: 1388456809.555333 CHKUSER relaying rcpt: 
from <xxxxx@yyyyyy.com::> remote <www.ourmailserver.com:localhost:127.0.0.1> 
rcpt <aaaaaaa@bbbbbb.com> : client allowed to relay

 

So now we have identified another message that counts as successfully sent, but this one was allowed because it came from webmail (via localhost), not because of an authenticating email.

It seems like every match we have includes CHKUSER… without "sender:". Are we missing anything? Let's run a grep on the logs excluding CHKUSER and see what we have. I'll save you looking at a pile of mess, but the short of it: unless we care about connections, logins, logouts, spam and RBL checks, we can probably rule out anything without CHKUSER. There is still a lot of good stuff with CHKUSER that we haven't touched on yet. It seems to track all messages received by accounts, as well as the sending information we've been focusing on. 

So, making an executive decision, I think I want to keep all log lines with CHKUSER, but without the 'sender:' lines.
Let's do some counts, and see how many rows we get with and without the "sender:".

# grep CHKUSER mail* | grep -c 'CHKUSER'
2350744

# grep CHKUSER mail* | grep -c -v 'sender:'
1364377

 

So removing those sender items drops the count in half, and it is now about 15% of the total number of log lines. And we still have room to grow… so I am pretty happy with this process. Let's wrap all the files together into one big log, and check the file size. 350mb… down a lot from the initial 2.5gb… and this is 5-6 weeks of logs, so this will be much smaller on a week to week basis.
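
The wrap-up itself is nothing fancy; something like this does the job (the combined file name is just an example, chosen so it does not match the mail* glob we are reading from):

cat mail* > combined_maillog.log    # glue the 5-6 weeks of logs into one file
du -h combined_maillog.log          # human readable file size
wc -l combined_maillog.log          # line count of what is left after filtering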

So now we know what we want out of the log files… let's set up the process of getting the logs moved, pull the "goods" out of them, and put them in a position for our CFML Agent to do its job. As I mentioned in our previous post, we want to keep separation of duties on the servers, so we have Server A with mail running, and the log files… and we want to move them to Server B… massage them, and prep them for CFML. 

Check back soon, because next we will setup our cron jobs on Server A.

Gavin

29
January 2014

Gavin Pickin

State of the Union - Source Control - Mr Adam Cameron is at it again

CFML Language, Chit Chat, Source Control

One of my fellow countrymen (Exported Kiwis), Adam "Vinegar and Beer" Cameron, has posted some interesting material on his blog. Well, there is no surprise there, his stuff is always interesting actually, but I should be more accurate and say, he has posted some "explosive" material on his blog.

My last post discussed a great CFML community survey, where you should go and answer a few simple questions, so we all, as a community can see how the community is looking, what we're using, what we're working with, etc etc. 

After you complete the survey, you can see up to the moment results, and one of the results got Adam going.

"What the Fuck are you thinking" was his response to the Source Control Question. 10% of developers don't use it, and another 10% are not using "REAL" source control, so 20% of the developers who answered this survey, are not using Source Control… and to many people, that is Ridiculous. Now, keep in mind, this is 20% of the people active in the community, which is probably a skewed result, so that 20% of the active community, might equate to 50% of the full community (COMPLETE GUESS ON MY PART).

I have posted about 14 posts on Source Control recently, as we have finally seen the light, put in the effort and have implemented Source Control. My first post in the series explained a lot about why a lot of people don't use it, and that was us up until not too long ago. 

I am not an elitist, I don't do everything right, I haven't been using Source Control for decades like some developers have… but little old me, I think you're crazy too if you aren't using it, or you're not moving that direction.

Read my post, and get some more information on why your reasons are probably not as valid as you thought.
Not Using Source Control? Amazingly, You're Not Alone

Andy Allan also posted this great article on Source Control just recently too.
http://andyallan.com/version-control/

Now, you can read Adam's post here, and more importantly the 25+ responses in the few hours since he posted it.
http://cfmlblog.adamcameron.me/2014/01/to-that-20-of-you-out-there-what-fuck.html

Do you think he went too far? Or do you think his explosive posts get the attention they need to get the job done?
With Adobe, all the noise he makes seems to help. Let us know what you think.

Happy Reading,

Gavin

29
January 2014

Gavin Pickin

The State of the Union - The Important One - CFML State of the Union Survey

CFML Language, Chit Chat

Yes, it is that time of the year again… the State of the Union… not the USA political nightmare that dominates the TV channels, but the CFML State of the Union. CF United might not be an active conference anymore, but they still post items to their website, and for the last couple of years, at the same time as the political State of the Union, they post a survey where the CFML community is encouraged to spend 5 minutes, answer a few questions, and we can all see what others in the community are doing. The survey discusses Adobe ColdFusion, Railo, OpenBD, what IDEs you use, Source Control, conferences, front end frameworks, etc, to get a snapshot of what is going on in the CFML community around us.

The link to the survey is here.
http://cfunited.com/blog/index.cfm/2014/1/28/State-of-the-CF-Union-survey-2014

When you complete the survey, you get a great little running snapshot of the results to that point in time. They have not worked out time travel yet, so you'll have to wait for future responses to be included. I believe it ran for a week or two last year, and they stated on their website they would announce the full results Feb 11, 2014.

So jump in and fill it out; it doesn't take long, it's interesting, and the more information in it, the better it is.
Reach out to those who don't live on Twitter and read the blogs, and see if they'll help get some real data in there.

Thanks

Gavin

28
January 2014

Gavin Pickin

Server Admin - From Mail Logs to DB Stats in a CFML Dashboard - Part 1

CFML Language, Server Admin

Lately, I have had a lot more Server Admin / Dev Ops duties than in the past… I am really enjoying a different type of problem solving as I am trying to leverage Dev Ops to increase automation and efficiency. I thought it might be interesting to map out my latest little project, one for my own learning, two for the purpose of sharing, and three for feedback on what I am doing wrong and what I could improve. This is Part 1 of a short series as I explore my problem, and how I find a solution.

As a Web Hosting Company, we obviously host websites, and a lot of Email Accounts for our Customers. These days, with Spam, you have to really be on top of your game, otherwise in a blink of an eye, your servers can get blacklisted, and you're constantly fighting to get your servers off blacklists, because it hurts each and every email customer on that server. Bad passwords, accounts being hijacked, viruses, and users sending out too many messages, or possibly bad messages, are just some of the things you get blacklisted for.

Our mail servers are set up on CentOS; it's the OS of choice for our Linux servers. Our mail servers run qmail, with Dovecot for IMAP, Vpopmail, and SquirrelMail / RoundCube for our webmail. We run Nagios for monitoring, and although Nagios is a great product and can be highly configured, sometimes it's really hard to identify potentially dangerous activity until it's too late.

With my experience using CFML and databases, I thought: if I can run filters on the logs, looking for key situations, we can create a dashboard for email, quickly identify when things start to go off track, and use trends and more detailed reports to identify problems and gauge our mail situation better.

I know a few of you are probably thinking, Gavin, what is wrong with you. There are right tools for the job, and CFML is not the right tool for reading and interpreting log files. You are right, and I do not intend to do all the heavy lifting with CFML. With these log files being hundreds and hundreds of megabytes each, there needs to be a process performed before CFML can do its piece… so we need to identify those steps, and how to get the data to a point where CFML can use it.

Remember, just because you are good with ColdFusion, Railo, CFML, MySQL, MSSQL, JavaScript, jQuery, NodeJS, AngularJS, EmberJS, etc, it doesn't mean they are always the best tool for the job. I did look around for other solutions out there, but nothing fit well, so this is a journey on how to work with my situation, using tools I know, and tools I don't, to get to a final solution that works, and works well. 

Step 1 - What Do I have to Work With

The first task on my list: look at the log files, how they are stored, where they are stored, how they are archived, and how the logs identify each of the use cases I want to monitor.

Our system is set up so qmail dumps all its logs into /var/log/maillog.

Every 7 days, the file is archived with a datestamp, like maillog-20131112 for example, and the maillog file is reset and fills up again over the next week. For long term stats, getting the weekly log file would work, but if we're a week, or even a day, late picking up on a potentially bad activity, we're going to be blacklisted and have to play the recovery game… so we'll need to look at a more continual process. Also, with separation of duties, we do not want to be running CFML and log processing on the same machine, so we have to think about moving the files across the back network too.
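
A quick look in /var/log shows what we have to work with; something along these lines lists the live log and the weekly archives (sizes and dates will obviously vary):

ls -lh /var/log/maillog*        # the live log plus the weekly maillog-YYYYMMDD archives
wc -l /var/log/maillog          # how many lines we are dealing with so far this week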

Step 2 - What are the Cases I want to Identify in the Logs

Before I can decipher the log files to look for the use cases we want to monitor, let's decide what they are.

  • Message Sent - No Authentication - Our servers require authentication unless they are using Webmail, which of course requires a login.
  • Message Sent - Authenticated - Same User - When authenticated as the same user as the "sender".
  • Message Sent - Authenticated - Different User - When the authenticated account was used to send messages from a different sender.
  • Message Not Sent - Authentication Not Provided - Relaying was not allowed, only providing a sender
  • Message Not Sent - Authentication Failed - Relaying was not allowed, with a sender and an authenticated user

Step 3 - Identify the Patterns in the Logs - Find Relevant Data

Looking at the log files, I look for trends and patterns to help me identify what each entry is actually about. Looking through the logs, there is instantly a lot of noise to filter out. Here are 10 lines from the time of writing this entry.

Jan 28 22:00:01 independence qmail: 1390975201.373003 status: local 0/10 remote 0/20
Jan 28 22:00:01 independence qmail: 1390975201.373089 triple bounce: discarding bounce/788060
Jan 28 22:00:01 independence qmail: 1390975201.373101 end msg 788060
Jan 28 22:00:01 independence smtp-mx: 1390975201.476590 rblsmtpd: 66.219.101.117 
pid 11238: 451 http://www.barracudanetworks.com/reputation/?pr=1&ip=66.219.101.117
Jan 28 22:00:01 independence smtp-mx: 1390975201.532184 tcpserver: end 11236 status 0
Jan 28 22:00:01 independence smtp-mx: 1390975201.532194 tcpserver: status: 2/100
Jan 28 22:00:01 independence smtp-mx: 1390975201.634301 tcpserver: end 11238 status 0
Jan 28 22:00:01 independence smtp-mx: 1390975201.634310 tcpserver: status: 1/100
Jan 28 22:00:01 independence dovecot: pop3-login: Login: user=<username@domainname.com>, 
method=PLAIN, rip=76.79.99.3, lip=209.164.17.131, mpid=11277, session=</fXLphXxGQBMT2MD>

 

This is not very useful, so I have to do some grepping to find useful information. Using my own email address, I was able to find a bunch of emails sent to me and from me, and that helped me quickly identify a couple of useful statements.

If I grep 'client not allowed' maillog, I see hundreds of lines like the following

Jan 28 21:32:14 independence smtp-mx: 1390973534.709575 CHKUSER rejected relaying: 
from <sysadmin@salvex.com::> remote <salvex.com:quicksmtp.salvex.com:204.232.190.243> 
rcpt <username@thedomain.com> : client not allowed to relay

So let's break it down, so we can understand it.

Jan 28 21:32:14
First piece… is the date… missing the year, but we can still parse that into a date time object for graphing over time.

independence smtp-mx
Next, we have our server name and the service. In this case, I'm looking at my independence server, and we're looking at the smtp-mx service. 

1390973534.709575
Next is our message id, so we can identify this message, and we can look for other log lines referencing this. This can be very important, some mail loggers actually have 5 or 6 lines for each message id, and you have to do some serious merging of the lines to get real information. Luckily, with qmail, this is not so, as you can see, there is a lot of information here.

CHKUSER rejected relaying
Next, we see the actual status of the log line, CHKUSER is the method, and the status is rejected relaying. This means that the user is not local (i.e. webmail which is on a trusted ip and allowed to send) and the user did not pass the authentication.

from <sysadmin@salvex.com::>
This is the sender information. If you notice, there are some colons in the angle brackets. It took me a while to work out what these were all referring to, but you'll see soon enough.

remote <salvex.com:quicksmtp.salvex.com:204.232.190.243>
This is the remote address of the sender, and it includes the domain name, the reverse dns lookup and then the IP address of the user trying to relay the message. 

rcpt <username@thedomain.com>
This is where the message is supposed to be sent. This domain name has been changed, for a little privacy for our customers. But this is an internal email address… and that is why the service is smtp-mx. I will show you another example of a relay being rejected to an external address, and the service is in fact different.

client not allowed to relay
Finally, the end of the log line has a little more information, which clearly states in English that the client is not allowed to relay.
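
Once you know the shape of the line, pulling the interesting pieces out on the command line is straightforward. This is just a sketch of the idea (not the final process), using sed with backreferences to print the sender, the remote information, and the recipient for each rejected relay attempt:

grep 'client not allowed to relay' /var/log/maillog | \
  sed -n 's/.*from <\([^>]*\)> remote <\([^>]*\)> rcpt <\([^>]*\)>.*/from=\1 remote=\2 rcpt=\3/p'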

Now, for comparison of the local versus remote recipient, here is another log snippet.

Jan 28 21:19:10 independence smtp: 1390972750.741663 CHKUSER rejected relaying: 
from <oltqs@oneofourdomains.com::> remote <gruporga-be3b0e:host51-70.brs.com.br:177.11.51.70> 
rcpt <teste3.pop3@hotmail.com> : client not allowed to relay

 

Now, here you see the rcpt, or the recipient, is a hotmail account. This is actually something we see popping up a lot, and without the start of this tool, we wouldn't have noticed. We see several attempts on random accounts on random domains, all to teste3.pop3@hotmail.com and yahoo.com etc. Basically, this robot goes and sends out millions of messages, and if any of them make it through to this account, they know that server is vulnerable, and then use it to hammer the world with spam. Obviously, we do not have our servers set up as open relays, but if we did, this is how they fish for them.

I am going to cut this post short; I do not want to get too deep into the logs in the first post. I hope you find it interesting enough to come back and see the next step: identifying our other use cases, and then how we go from knowing what to look for to actually using it.

Thanks for joining me on my adventure,

Gavin

23
January 2014

Gavin Pickin

CFML - Time to stop using ColdFusion's UI - Do it Right

CFML Language, Chit Chat

Today, Adam Cameron and Raymond Camden announced a new community project that is long overdue... ColdFusion UI - the Right Way. For some time now, ColdFusion's UI elements have been out of date, inefficient, and end up causing more harm than good. It's the "best practice" in the community to steer people clear of them, but there is no real resource for where to point them to. This is where this project comes in... you can tell them, go to this github repo, and you can see several alternatives to ColdFusion's implementations, with links, notes, and examples, to help you get things done, the right way.

What does Adam Cameron think of the Project?
With the title for his blog post being Oi! You bloody wankers! Stop using ColdFusion UI controls, I think it speaks for itself. 

Why do I think its a great project?
The main reason is, I am constantly looking for alternatives myself. A lot of the time I find one and use it on several projects, and then I forget which one I used, or maybe a plugin won't work because of an incompatibility, because EXT and jQuery don't play nice, or something. It would be better than my own personal log, because we'd have the best in the CFML community contributing, and it can grow and evolve, like good community projects should.

Where to get started?
Raymond Camden's the main Repo - https://github.com/cfjedimaster/ColdFusion-UI-the-Right-Way
Go fork it, star it, and think about what UI elements you are using as alternatives to the ColdFusion UI 

What have I done so far?
In the spirit of the project, I have done a little work, and shelled out the beginning of the cfinput - datefield UI element. 

Since the project is brand new today, I just built a simple shell, and thought it would be good as a tester to play with how we want to structure the site, lay out the elements, etc. Obviously it's plain and ugly; we could throw some lightweight framework on it to pretty it up some, and then the key would be trying to keep everyone consistent. We could obviously structure it in such a way that it self-documents too, and we could have the pages pull the sub-directories for links and titles, and try to make it grow itself. 

My Fork of the main Repo
https://github.com/gpickin/ColdFusion-UI-the-Right-Way

 

If you don't have an open source project, now you do.
If you have one, consider contributing to this one too... its a great cause... making cfml developers better.

Thanks for reading,

Gavin

22
January 2014

Gavin Pickin

Unit Testing - Online Learning - Great way to understand Dependency Injection

Angular, Dependency Injection, Online Interactive Learning, Unit Testing

We've done a lot lately with Unit Testing, and one of the items we touched on, only briefly, in the scheme of things, was Dependency Injection, and why it is a good practice to use. We also discussed why a Dependency Injection framework like ColdBox's WireBox, Sean Corfield's DI/1 from Framework/1, and Coldspring from the XML days are a great benefit, especially as your app gained momentum and size. 

I explained it here, and although I only touched on it, I know Dependency Injection is not an easy concept to grasp. The funny thing is, once you actually do it, you think to yourself, wow, that's not complicated at all. Yesterday I blogged about the great videos available on YouTube from Angular's conference, ng-conf, and one of the ones I watched about Dependency Injection was priceless... so today's post is going to discuss that video, and then let you watch the video, and everything will click (I hope).

The first thing I want to say is, although it's referencing Angular, the explanation in the first part of the video is universal, and you do not need to know anything about Angular to get the full value out of it. The examples are pretty standard in Dependency Injection documentation and wiring guides, where they use a coffee machine as a reference. It's funny, because their model is almost identical to ColdBox's WireBox documentation, which has been around for quite some time. Although I'm sure the Angular guys do very little CFML, it's cool to see the similarities between CFML Dependency Injection and something the Google engineers on Angular are working with.

The similarities do not end there. I admit, I haven't had time to look at Sean Corfield's DI/1 yet (it's on my to-do list), but I have had some exposure to WireBox, and watching this video, the patterns Angular implements look very familiar. That is obviously because the top quality engineers in CFML look to the bigger industry at large, and study the design patterns and other implementations, when bringing frameworks to CFML. It made me feel good about the language that we love so much, and that we have some great quality tools in place, designed in the spirit of Open Source, but up to professional grade. 

That got me thinking about things that we do, how often do we go outside the box, and look outside of our language for design patterns and solutions. Maybe next time you hit a problem... look outside and see what you can do to lift the cfml world.

Enough talk.. check out the video :)

 

 

21
January 2014

Gavin Pickin

Conferences - Online Learning - Angular's NG-CONF Videos All Online

Angular, Conferences, Online Interactive Learning

I have talked a lot about conferences lately, and I apologize, because most of them are very expensive, you have to travel a long way, and there are 500 awesome conferences; if we went to them all, we'd get no work done. Today, I talk about one we all missed, well, most of us... Angular's conference, ng-conf, which was held last week... January 16th and 17th in Salt Lake City, Utah. So why am I talking about it? If you read the title you'll know why. ng-conf, being a Google backed product, put all of their awesome sessions on YouTube :) I actually caught a few on the live stream, which was cool, as you could ask questions to the team during presentations etc, but the videos are just as great after the fact.

NG-CONF's website is online at http://ng-conf.org/

Read about all of the great speakers and their topics. I have thoroughly enjoyed all of the videos I have watched so far. I am new to Angular, but these videos make it easy to pick up, and they pique my interest, as they are doing some amazing things.

The videos on YouTube can be found here https://www.youtube.com/user/ngconfvideos

I have a couple of favorites already, which I will write a post about very soon.
Check it out; it's great material whether you know Angular, want to know Angular, or just have the learning bug in general.

Enjoy,

Gavin

 

21
January 2014

Gavin Pickin

Techie Gotcha - SSL Certificate Problems with Apache and Issuer Chain

Apache, OpenSSL, Server Admin, Techie Gotchas

I am sure SSL Certificates are not new to most Web Developers, unless you have the luxury of an Admin team. I have been using them for years, most of those years admittedly on Windows, but the last several years we have been migrating all of our windows boxes over to Linux (as you could tell from the majority of my posts). Just recently though, something new came up, which I had not seen before, so I thought I would share my experience, for anyone else looking for a solution out there.

A customer had called and stated that their security certificate was not working correctly on their site. We had just renewed it, installed it, and tested it. So I do the usual tests: I pull it up in my browser, check for http calls on an https page, check the certificate information, and all looks good. Then we check with the customer, and they tell us they are using Firefox. My default browser is Chrome, so I reach for Firefox, and sure enough there are SSL problems with the cert. Firefox shows the following error message.

www.domainname.com uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)

Interesting, isn't it? 

I checked Safari, Chrome, and Opera, all good, and apparently IE is the same, but Firefox, as of version 23/24, seems to have added an additional security check, where Firefox checks the whole issuer chain. Apparently IE and some browsers automatically download the chain for you behind the scenes, if possible, but Firefox cannot or does not.

Now, one thing I noticed was none of my SSL certificates on my Windows boxes have any ssl issues, so I assume that IIS on those machines takes care of the issuer chain. I know when you download an SSL Certificate you get the Certificate File and a Bundle… so obviously that is used for exactly this. 

To verify what is actually going on, I used a cool little tool from SSLShopper.com… which is their SSL Checker.
http://www.sslshopper.com/ssl-checker.html
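
If you prefer the command line, openssl can show you exactly which certificates the server is actually presenting; if only one certificate comes back, the intermediate bundle is not being served. Something like this works (the host name is just a placeholder):

# lists every certificate in the chain the server sends; look for more than one BEGIN CERTIFICATE block
openssl s_client -connect www.domainname.com:443 -servername www.domainname.com -showcerts < /dev/null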

When you run it for one of the domains we host (tweaked to reproduce the issue for this article), you will see something like this.

If you scroll down, you will see the SSL cert and the chain of issuers, and in this case, it's broken, as you can see below.

The main reason I'm blogging this is because all of the information I found pointed to Mozilla, with a lot of information on how someone should fix their browser. Looking at this information, it's not really a browser issue; there really is an issue with the way the SSL is installed and being used. 

I did some more digging, and found more information about how to set up your SSL in your Apache configuration files with OpenSSL… and noticed I was missing one particular directive.

SSLEngine On
SSLCertificateFile /PathToMySSLs/2014.www.donlucas.com.cert
SSLCertificateKeyFile /PathToMySSLs/2014.www.donlucas.com.key
SSLCertificateChainFile /PathToMySSLs/intermediates/sf_bundle-g2-g1.crt

The last line above was missing in this particular virtual host, so I downloaded the bundle file with the cert again, stored the bundle file in an intermediates folder, added the line to my virtual host, reloaded my Apache httpd config, reloaded the domain in Firefox, and success.
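
For anyone following along, the reload itself is nothing special; on a CentOS style box it is something like this (test the config first so a typo in the vhost does not take the sites down):

apachectl configtest            # or: httpd -t
service httpd reload            # graceful reload, picks up the new SSLCertificateChainFile line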

I checked the site again in SSL Shopper and I see the following result.
Now the certificate has been listed correctly.

If we scroll down, we see the full chain, and this time there are no broken links.

So that seemed simple enough, but something was still bothering me. I knew I had seen the SSLCertificateChainFile configuration before, so I decided to go through my other SSL certs set up in Apache with OpenSSL… and I realized some of our sites were using that configuration, and were set up to use sf_bundle.crt. Why were those domain names failing though? 

It looks like the more recent renewals were using a different Intermediate Bundle to complete the Issuer chain.

 

What does this mean? 

This means that even if you were already using the SSLCertificateChainFile config on your virtual hosts, you need to check the issuer chain with a tool like SSL Shopper's SSL Checker, to ensure the intermediate bundle you were using is still valid.

Of course, you can download the bundle and install it each and every time, but I'm not sure if the SSL providers are going to name them distinctly enough to be able to differentiate versions.

 

So why is Firefox different?

Some say that because of SSL certificate vulnerabilities at providers like Comodo, Firefox wants to err on the side of being more secure and force the server to install and reference all the intermediate bundles, because it's seen as a security problem.

There is great debate on the Mozilla forums (just one of the many links here) for and against this stance… interesting that they are the only browser not downloading the intermediate certs automatically.

SSL Shopper has more information on this topic here
http://www.sslshopper.com/ssl-certificate-not-trusted-error.html

I do not know how helpful this is, but blogging it helps me log my work, and I'm sure someone will come across this type of issue at some point.

Thanks for reading,

Gavin

20
January 2014

Gavin Pickin

Conferences - Why did I want to speak at Into the Box

cfObjective, ColdBox, Conferences

I was talking to a couple of people about the upcoming conferences, Into the Box May 13th and cf.Objective() May 14-16, both in Bloomington, MN, and they asked why I wanted to speak at Into the Box when I was already speaking at cf.Objective(). Here are a few questions and my answers, a peek into my thought process.

 

What made me want to speak at Into The Box 2014?

I am just starting to get the Speaking bug in general, and after being picked for cf.Objective(), I couldn't miss the opportunity to try and speak at Into the Box too. My first real big presentation to a community conference like event was ColdBox Developers Week, where I presented on the ColdBox Koans and how to use Koans to help with Test Driven Learning. The ColdBox team gave me a chance, and I enjoyed it. Now they might have created a monster, as it gave me the confidence to reach out and try and speak on more topics, at more events. Since Team ColdBox gave me the start, I wanted to contribute back when I heard they were doing their own Pre-Conference.

Another very important reason was that it was an opportunity to extend my trip to Bloomington, and to spend time with so many of the friends I've made over the last year or so, within the ColdBox and ColdFusion community in general. In 2013 I was in Bloomington for the ColdBox Bootcamp followed by cf.Objective(), I had 6 days straight of code and community, and at the end of it I walked away inspired to do more, give back more, code more, and couldn't wait for the next opportunity.

 

What does web development mean to me?

Apart from my Family and Friends, which I adore, Web Development is a big part of my life. 

It provides for my family, through my work. It provides me a never-ending opportunity for addictive learning, development, and growth. It helps me find solutions to everyday questions, because I can build an app for that. Cheesy, but true.

Web Development is what I read about on my iPad, it's what I watch videos about when I work out, what I think about when I can't sleep, it's what I do for fun, and it's just awesome I can get paid for it too. 

 

What advice would I give to my younger self, knowing what I know now?

Keep reaching out for the community. When I started, the community didn't exist like it does now… but I am pretty late to the community party. The community in the web development world is a great one, especially ColdFusion. With Twitter, all of the blogs, Google+, Facebook, Meetup, online meetings, and all of the other social media tools, you are so much closer to the community than we have ever been before.
If I could go back, that is the one big thing I would have changed… because I think community helps to motivate you, teach you, help you, and it's worth its weight in gold… and only costs you time. 

 

What am I looking forward to most from Into the Box?

I really look up to a lot of the speakers we have at Into the Box… with 2 of the big names in ColdFusion OpenSource Frameworks Sean Corfield and Luis Majano, I really want to absorb all I can. The speakers overall have a wide range of experience and specialities, and there is always something I can learn that can make me a better Developer / Community Member. 

The most important piece will be time with a great group of developers, a great group of business people, a great group of minds, and a great group of people.

 

What other session am I looking forward to at Into the Box?

Testing is really big right now, and I am very interested in Sean's "An Introduction to Behavior-Driven Development (with TestBox)" presentation. [EDIT - Updated the presentation name ]

The rest of the sessions, ORM, Security, REST, Enterprise Architecture, Dependency Injection, NoSQL and Legacy Migrations, they are all so hard to pick between, as there is a lot of great content… to be honest, there might be some last minute calls on which sessions I will be attending. 

 

I'm really looking forward to my talk... I'm speaking about Mockbox with "Just Mock It". 

Just Mock It… Mock what? What Mock? 
Learn What is Mocking, and how to use Mocking with ColdFusion testing, development, and continuous integration.
Look at Mocking and Stubbing with a touch of Theory and a lot of Examples, including what you could test, and what you should test… and what you shouldn't test (but might be fun).

 

More information on Into the Box here www.intothebox.org
More information on cf.Objective() here www.cfobjective.com

Hope to see you all there.

Gavin

 
