With recent breaches like Heartbleed, the Internet Explorer vulnerability and credit-card data theft against businesses like Target, Michaels and Neiman Marcus, web security is on people’s lips (and keyboards) more than ever. For the average web developer, there are a number of standards and best practices to employ – some second nature, some forgotten. Here are four security guidelines all developers should be following.

Never Trust the Input

As developers, we have to assume some of the folks using our end product will have malicious intentions. If we let people do whatever they want with their input, we’ll find ourselves giving out information we shouldn’t, ruining others’ web experiences or bringing down the site entirely. This is a broad category, but there are several steps that cover a good swath of territory:
- Never insert user input directly into a database query. SQL injection is one of the oldest attacks in the book, and one sloppy query is all it takes to open the door. Always pass user input as validated parameters to a stored procedure or other parameterized query. Don’t let little Bobby Tables bring your site to a halt.
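To see the difference a parameterized query makes, here’s a minimal sketch in Python (the same idea applies to SqlCommand parameters in .NET); the table and data are hypothetical:

```python
import sqlite3

# Throwaway in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('Bobby Tables', 'bobby@example.com')")

def find_user(name):
    # BAD: string concatenation invites SQL injection:
    #   conn.execute("SELECT email FROM users WHERE name = '" + name + "'")
    # GOOD: with a placeholder, the driver treats the value strictly as data.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("Bobby Tables"))   # [('bobby@example.com',)]
print(find_user("' OR '1'='1"))    # [] -- the payload is inert data, not SQL
```

Because the classic `' OR '1'='1` payload never reaches the query as SQL, it simply matches no rows.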
- Never show raw HTML markup generated from a user. Any time you take submitted text, you should either strip out tags or ensure the text is escaped. For simple fields like name and phone number, you can limit the accepted characters to just alphanumeric characters and some basic punctuation like apostrophes, hyphens and parentheses. For more-complicated fields like page or module content, you can disallow certain tags like <script> to make sure only basic formatting will be reflected.
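Both approaches – whitelisting characters for simple fields and escaping markup for free text – can be sketched in a few lines of Python (function names here are hypothetical):

```python
import re
from html import escape

def clean_name(raw):
    # Whitelist: letters, digits, spaces, apostrophes, hyphens, parentheses.
    return re.sub(r"[^A-Za-z0-9 '\-()]", "", raw)

def render_comment(raw):
    # Escape rather than strip: the text survives, the markup is neutralized.
    return escape(raw)

print(clean_name("O'Brien <script>"))   # O'Brien script
print(render_comment("<script>alert('x')</script>"))
```

The escaped comment renders as visible text in the browser instead of executing as a script.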
- Don’t only use client-side validation. While it can give users quicker feedback on their input, bad data can still get through a form when scripts are disabled or modified. Always check input on the server side, whether or not it’s being checked on the client side. Also, always validate query-string parameters – those are easily changed!
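Server-side checks don’t have to be elaborate. Here’s a sketch, in Python for brevity, of re-validating a hypothetical `?page=` query-string value and a phone-number field; the patterns and limits are illustrative assumptions:

```python
import re

# Hypothetical whitelist for a (###) ###-#### phone field.
PHONE_RE = re.compile(r"^\(?\d{3}\)?[ -]?\d{3}-?\d{4}$")

def validate_page(raw):
    """Re-check a ?page= value on the server; never trust the client."""
    if not raw.isdigit():
        return 1                          # fall back to a safe default
    return min(max(int(raw), 1), 1000)    # clamp to a sane range

print(validate_page("7"))        # 7
print(validate_page("-3"))       # 1 -- rejected, safe default
print(validate_page("99999"))    # 1000 -- clamped
```

Clamping to a default instead of erroring keeps the page usable while still refusing the bad value.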
- Whenever dealing with sensitive information, use HTTPS instead of HTTP. If passwords, credit-card numbers or other information are passed outside of HTTPS, a third party may potentially “sniff” the requests and obtain this information.
- Use custom error pages. The default .NET error pages (“yellow screens”) will display information about server data and code structure that could help a malicious person find ways into your application’s sensitive data. Use a more generic, but helpful, error page for when something goes awry and log any pertinent troubleshooting information on the server.
- Don’t let people browse the directories of your website. Log files should not be accessible from the Internet. If someone can guess at a publicly accessible log directory or file name, they may uncover a world of possibly valuable information that can seriously compromise security.
- Access databases securely. If possible, use integrated security to access your database server so database credentials are not present in any files. Any connection info (like server locations and passwords) or other sensitive data should be kept in a secured configuration file (such as Web.config) and, if possible, stored using machine-level encryption. MSDN has an article on encrypting configuration information that will help in this task.
- Don’t store sensitive information in cookies. Passwords, account numbers, social security numbers – all of these may be either retrieved or changed.
- Encrypt cookie information. Alternatively, store only a reference to a server-side record or some other lookup key within the cookie.
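A related lightweight option is signing the cookie value so tampering is at least detectable (signing proves integrity but, unlike encryption, doesn’t hide the value). A minimal HMAC sketch in Python, with a hypothetical secret key:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"   # hypothetical key; keep it out of source control

def sign(value):
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{mac}"

def verify(cookie):
    value, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return value if hmac.compare_digest(mac, expected) else None

token = sign("user_id=42")
print(verify(token))    # user_id=42
```

If anyone alters the value without knowing the key, the MAC no longer matches and `verify` returns `None`.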
- Set expiration dates on your cookies. The longer the person has the cookie, the more time someone has to modify the data within. Shorten cookie-expiration dates, taking into consideration what makes practical sense for you.
- Consider using the Secure and HttpOnly properties of the cookie. The Secure property allows the cookie to be transmitted over SSL only. The HttpOnly property stops client-side scripts from reading cookie information. These aren’t always properties you want to use, but they can certainly help.
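The expiration, Secure and HttpOnly settings above all end up as attributes on the Set-Cookie header. Here’s how that looks using Python’s standard library (the cookie name and one-hour lifetime are illustrative; ASP.NET’s HttpCookie exposes the same flags):

```python
from datetime import datetime, timedelta, timezone
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-reference-token"
cookie["session"]["secure"] = True      # only sent over HTTPS
cookie["session"]["httponly"] = True    # hidden from client-side scripts
cookie["session"]["expires"] = (
    datetime.now(timezone.utc) + timedelta(hours=1)
).strftime("%a, %d %b %Y %H:%M:%S GMT")

# Emits one "Set-Cookie: session=...; expires=...; HttpOnly; Secure" header line.
print(cookie.output())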
- Always release resources after you use them, even in case of an error. For example, if you’re reading a file for a user and an exception occurs, a try/catch/finally structure with the file reader closed in the finally block will ensure the opened file won’t continue to eat up system resources after things go wrong.
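The try/finally pattern looks like this in Python (the file and function names are hypothetical); the same structure works with try/catch/finally in C#:

```python
import os
import tempfile

def read_config(path):
    f = open(path)
    try:
        return f.read()    # even if this raises...
    finally:
        f.close()          # ...the file handle is still released

# Idiomatic shorthand: a context manager closes the file automatically.
def read_config_idiomatic(path):
    with open(path) as f:
        return f.read()

# Demo with a throwaway file.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)
print(read_config(path))   # hello
os.remove(path)
```

The `with` form is preferred in Python, but the explicit finally block makes the guarantee visible: cleanup runs on every exit path.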
- If you’re using IIS, there are several configuration options that can help guard against malicious activity. For instance, you can set the maxRequestLength setting to limit the size of user uploads. If large uploads are necessary, the RequestLengthDiskThreshold property can help reduce those posts’ memory overhead. Process throttling will prevent an application from taking up too much CPU time.
- When making database queries based on user input and showing the results, make sure the displayed results fall within some reasonable bounds, either by a hard limit or through pagination.
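A hard cap plus pagination is usually one LIMIT/OFFSET clause away. A sketch in Python with a hypothetical products table (note it also parameterizes the search term, per the first guideline):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(500)])

PAGE_SIZE = 25   # hard upper bound on rows returned per request

def search(term, page=1):
    page = max(1, page)   # clamp bad page numbers
    cur = conn.execute(
        "SELECT name FROM products WHERE name LIKE ? "
        "ORDER BY id LIMIT ? OFFSET ?",
        (f"%{term}%", PAGE_SIZE, (page - 1) * PAGE_SIZE),
    )
    return [row[0] for row in cur]

print(len(search("item")))          # 25 -- never the full 500 rows
print(search("item", page=2)[0])    # item-25
```

However broad the search term, no single request can drag the whole table across the wire.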