Archive

Archive for the category ‘Web development’

Redmine as project management on OpenBSD

7 August 2011

Redmine is a web-based project management and bug-tracking tool. It includes a calendar and Gantt charts to aid the visual representation of projects and their deadlines, and it supports multiple projects. The integration throughout the system is excellent: we can create nested subprojects and move issues/tickets from one project to another. For each project we can assign different users and turn certain functionality (milestones, time tracking, source control, ...) on and off. This article describes how to install Redmine on OpenBSD 4.9, using the official OpenBSD port from CVS. We will install the prerequisites as binary packages because, if you have a fast internet connection, this is faster than building the ports from source. Let's install Ruby on Rails:

pkg_add -i ruby-1.9.2.136p0

pkg_add ruby-gems

Official releases include the appropriate Rails version in their vendor directory, so no particular action is needed. If we check out the source from the Redmine repository, we can install the specific Rails version on the machine by running:

gem18 install rails -v=2.3.11

Install Rack:

gem18 install rack -v=1.1.0

gem18 install -v=0.4.2 i18n

gem18 install mysql

Redmine

Most users should install a proper point release of Redmine; installing Redmine from trunk is not recommended.

Installation procedure

  • Get the Redmine source code by either downloading a packaged release or checking out the code repository. See Download.
  • Create an empty database and an accompanying user, named “redmine” for example:

create database redmine character set utf8;
create user 'redmine'@'localhost' identified by 'my_password';
grant all privileges on redmine.* to 'redmine'@'localhost';

  • Copy config/database.yml.example to config/database.yml and edit this file to configure your database settings for the “production” environment. Example for a MySQL database (we also have to specify the MySQL socket file):

production:
  adapter: mysql
  database: redmine
  host: localhost
  port: 3307
  username: redmine
  socket: /var/www/var/run/mysql/mysql.sock
  password: my_password

  • Generate a session store secret.

rake generate_session_store

  • Create the database structure by running the following command from the application root directory. It will create the tables and an administrator account.

RAILS_ENV=production rake db:migrate

  • Insert the default configuration data into the database by running the following command:

RAILS_ENV=production rake redmine:load_default_data

  • Test the installation by running the WEBrick web server:

ruby script/server webrick -e production

Once WEBrick has started, point your browser to http://localhost:3000/. You should now see the application welcome page:

(Screenshot: Redmine login page)

Fuzzy hashing PHP Extension on OpenBSD 4.9

30 June 2011

For years, computer forensic investigators have put a great deal of stock in the effectiveness of MD5 hashing. To quantify that statement, I mean specifically using MD5 hashes to identify known malicious files. The key word in that sentence is known, but let's take it one step further and add the word “unmodified” to known files. One minor change to a file and the MD5 hash is completely different, rendering the investigator's search totally ineffective. So, what's the answer? Easy: fuzzy hashing.

Fuzzy hashing allows the discovery of potentially incriminating documents that may not be located using traditional hashing methods. The use of a fuzzy hash is much like a fuzzy logic search; it looks for documents that are similar but not exactly the same, called homologous files. Homologous files share identical strings of binary data; however, they are not exact duplicates. An example would be two identical word processor documents, with a new paragraph added in the middle of one. To locate homologous files, they must be hashed traditionally in segments to identify the strings of identical data.

Download the ssdeep package from the following link. Once you have downloaded it, extract and compile the package with these commands:

wget -O ssdeep-2.6.tar.gz "http://downloads.sourceforge.net/project/ssdeep/ssdeep-2.6/ssdeep-2.6.tar.gz?r=http%3A%2F%2Fssdeep.sourceforge.net%2F&ts=1309466525&use_mirror=ovh"

tar zxvfp ssdeep-2.6.tar.gz

and then

./configure && make && make install

Once the tool is installed, you can verify it by running ssdeep -V.

With the ssdeep tool installed, it is time to install the ssdeep PHP extension. Before proceeding, install the pear and autoconf packages so that you can use the pecl command:

pkg_add -i pear-1.7.2

pkg_add -i autoconf

export AUTOCONF_VERSION=2.64

After that you can try to install the PHP extension with the pecl system (pecl install ssdeep), but in my case it failed with an error related to the name of libfuzzy.so:

checking for ssdeep... configure: error: "Could not find 'libfuzzy.so'. Try specifying the path to the ssdeep build directory."

If you check the /usr/local/lib folder, you will instead find the libfuzzy.so.2.0 library. To work around this issue I decided to install the extension manually. I downloaded the pecl package from its original path (here the link to download it). Once you have downloaded the package, extract it and run the PHP build commands below:

wget http://pecl.php.net/get/ssdeep-1.0.2.tgz

tar zxvfp ssdeep-1.0.2.tgz

cd ssdeep-1.0.2

phpize

./configure

As you can see from the configure output, the same error is raised, so you have to edit the configure file and replace the required libfuzzy.so name with libfuzzy.so.2.0 at line 4174:

SSDEEP_LIB_FILENAME="lib$SSDEEP_LIB_NAME.so.2.0"

Now run the make and make install commands:

make && make install

You should get a success message from the shell.

The last step is to add the ssdeep.so extension in the php.ini:

nano /var/www/conf/php.ini

and add the following line in the extension section:

extension=ssdeep.so

Once you have saved the file, restart the Apache service:

apachectl stop

apachectl start

If you check the phpinfo() output, you should see an ssdeep section.
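As a quick alternative check from PHP itself, you can test whether the module is loaded; this is a minimal sketch that only relies on the standard extension_loaded() function:

<?php
// Minimal check: returns true once the ssdeep extension has been enabled in php.ini.
var_dump(extension_loaded('ssdeep'));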

If you want to check the full functionality of the installed extension, you can copy the “example.php” file shipped in the pecl package and run it on your web server. To do that, run this command (assuming you are still in the ssdeep-1.0.2 folder):

mv examples/example.php /var/www/htdocs/

and then open it in your browser to check the output.
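If you prefer a self-contained test instead of example.php, the sketch below uses the two main functions exposed by the pecl ssdeep extension, ssdeep_fuzzy_hash() and ssdeep_fuzzy_compare(); the sample strings are just placeholders.

<?php
// Minimal sketch: fuzzy-hash two similar texts and compare the signatures.
// The sample strings are placeholders; any pair of similar documents will do.
$original = str_repeat("The quick brown fox jumps over the lazy dog. ", 50);
$modified = $original . "One extra paragraph added at the end of the document.";

$hashOriginal = ssdeep_fuzzy_hash($original);
$hashModified = ssdeep_fuzzy_hash($modified);

echo "Original: $hashOriginal\n";
echo "Modified: $hashModified\n";

// The score ranges from 0 (no similarity) to 100 (identical content).
echo "Match score: " . ssdeep_fuzzy_compare($hashOriginal, $hashModified) . "\n";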

Preventing Duplicate Content

17 March 2011

Duplicate content is a problem with many websites, and most webmasters don’t realise they are doing anything wrong. Most search engines want to provide relevant results for their users; that is how Google became successful. If a search engine were to return five identical pages on the same page of the search results, it would not be very useful to the searcher. Many search engines have filters in place to remove the duplicate listings – this keeps their search results clean, and is overall a good feature. From a webmaster’s point of view, however, you don’t know which copy of the content the search engine is hiding, and it can put a real damper on your marketing efforts if the search engines won’t show the copy you are trying to promote.

A common request is to be able to remove or redirect “index.php” so it does not appear in the URL. This is possible only with server-side technology, such as “.htaccess” configuration files or your main server config, using the mod_rewrite module. Duplicate content occurs when the search engine finds identical content at different URLs, such as:

www and non-www

http://www.iwebdev.it and http://iwebdev.it

In most cases these will return the same page, in other words, a duplicate of your entire site.

root and index

http://www.iwebdev.it (root) and http://iwebdev.it/index.php

Most people’s homepages are available by typing either URL – duplicate content.

Session IDs

http://www.iwebdev.it/project.php?PHPSESSID=24FD6437ACE578FEA5745

This problem affects many dynamic sites, including PHP, ASP and Cold Fusion sites. Many forums are poorly indexed because of this as well. Session IDs change every time a visitor comes to your site. In other words, every time the search engine indexes your site, it gets the same content with a different URL. Amazingly, most search engines aren’t clever enough to detect this and fix it, so it’s up to you as a webmaster.

One page, multiple URLs

http://www.iwebdev.it/project.php?category=web&product=design and http://www.iwebdev.it/project.php?category=software&product=design

A product may be allocated to more than one category – in this case the “product detail” page is identical, but it’s available via both URLs.

Removing Duplicate Content
Having duplicate content on your site can make marketing significantly more difficult, especially when you are marketing the non-www version and Google is only showing the www version. Because you can’t tell the search engines which is the “original” copy, you must prevent any duplicate content from occurring on your site.

www and non-www
I prefer to use the www version of my domain (no particular reason, it seems to look better on paper). If you are using Apache as your web server, you can include the following lines in your .htaccess file (change the values to your own of course).

RewriteEngine On
RewriteCond %{HTTP_HOST} ^iwebdev\.it$ [NC]
RewriteRule (.*) http://www.iwebdev.it/$1 [R=301,L]

If your webhost does not let you edit the .htaccess file, I would consider finding a new host. When it comes to removing duplicate content and producing search engine friendly URLs, Apache’s .htaccess is too good to ignore. If your website is hosted on Microsoft IIS, I recommend ISAPI Rewrite instead.

Remove all reference to “index.php”
Your homepage should never be referred to as index.htm, index.php, index.asp, etc. When you build incoming links, you will always get links to www.iwebdev.it, so your internal links should always be the same. One of my sites had a different PageRank on “/” (root) and “index.php” because the internal links were pointing to index.php, creating duplicate content. Why go to the trouble of promoting two “different” pages at half strength when you can promote a single URL at full strength? After you have removed all references to index.php, you should set up a 301 redirect (below) to redirect index.htm to / (root).
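If your host does not allow .htaccess rules at all, a rough PHP-level fallback is to send the 301 yourself at the top of index.php; this is only a minimal sketch, and the domain is just an example taken from above:

<?php
// Minimal sketch: 301-redirect direct requests for /index.php to the root URL.
// The domain is only an example; replace it with your own canonical address.
if (strpos($_SERVER['REQUEST_URI'], '/index.php') === 0) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.iwebdev.it/');
    exit;
}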

Remove Session IDs
I can give advice for PHP users; ASP and CF users should do their own research on exactly how to remove these. With PHP, if the user does not support cookies, the session ID is automatically inserted into the URL as a way of maintaining state between pages. Most search engines don’t support cookies, which means they get a different PHPSESSID in the URL every time they visit – this leads to very ugly indexing. There is no ideal solution to this, so I have to compromise. When sessions are a requirement for the website, I would rather lose the small number of visitors who don’t have cookies than put up with PHPSESSID in my search engine listings (and potentially lose a lot more visitors). To disable PHPSESSID in the URL, insert the following code into .htaccess:

php_value session.use_only_cookies 1
php_value session.use_trans_sid 0

This means visitors with cookies turned off won’t be able to use any features of your site that rely on sessions, e.g. logging in or remembering form data.
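If php_value directives are not allowed on your host, a possible alternative is to set the same two options from PHP before the session is started; this is a minimal sketch that simply mirrors the .htaccess lines above:

<?php
// Minimal sketch: force cookie-only sessions at runtime,
// mirroring the two php_value directives shown above.
ini_set('session.use_only_cookies', '1');
ini_set('session.use_trans_sid', '0');

session_start();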

Ensure all database-generated pages have unique URLs
This is somewhat more complicated, depending on how your site is set up. When I design pages, I always keep the “one page, one URL” rule in mind and design my page structure accordingly. If a product belongs to two categories, I ensure that both categories link to the same URL, or I modify the content significantly on both versions of the page so it is not “identical” in the eyes of the search engine.
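One way to enforce the “one page, one URL” rule in PHP is to 301-redirect any non-canonical product URL to its canonical version. This is a minimal sketch; get_canonical_category() is a hypothetical helper you would implement against your own database, and the domain is just the example used above.

<?php
// Minimal sketch: redirect non-canonical product URLs to a single canonical URL.
// get_canonical_category() is hypothetical; replace it with your own lookup.
$product  = isset($_GET['product'])  ? $_GET['product']  : '';
$category = isset($_GET['category']) ? $_GET['category'] : '';

$canonical = get_canonical_category($product);

if ($category !== $canonical) {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.iwebdev.it/project.php?category=' .
           urlencode($canonical) . '&product=' . urlencode($product));
    exit;
}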

301 Redirections
A 301 redirect is the correct way of telling the search engines that a page has moved permanently. If you still want the non-www domain name to work, you should 301 redirect the visitor to the www domain. The visitor will see the address change, and search engines will know to ignore the non-www version and use the www version instead. Use your .htaccess to 301 redirect visitors from index.htm to / and any other pages that get renamed, e.g.:

redirect 301 /index.htm http://www.iwebdev.it/

LAMPP vs LD_LIBRARY_PATH

14 December 2010

To speed up the installation of a web application, people often turn to “pre-configured” packages where the only step required is extracting the archive. As long as the web application is “simple”, there are no problems, but as soon as you need, for example, to run native Unix commands or third-party commands (barcode generators, TSK, scanning tools and the like), library-related problems begin. The error you run into is the following:

/usr/local/bin/fls: /opt/lampp/lib/libcrypto.so.0.9.8: no version information available (required by /usr/local/lib/libafflib.so.0)
/usr/local/bin/fls: /opt/lampp/lib/libz.so.1: no version information available (required by /usr/local/lib/libewf.so.1)
/usr/local/bin/fls: /opt/lampp/lib/libgcc_s.so.1: version `GCC_4.2.0' not found (required by /usr/lib/libstdc++.so.6)

The problem stems from a version conflict between the libraries in the /usr/local/lib and /opt/lampp/lib directories. At first glance one might object that the most straightforward alternative would be to install the individual components instead of a pre-configured package; however, that path is not a solution to the problem but rather a workaround. During debugging, it turned out that the following code:

putenv("LD_LIBRARY_PATH=/usr/local/lib");
echo shell_exec("/usr/local/bin/fls -V 2>&1");

completely solved the problem. The root cause, then, lies in the LD_LIBRARY_PATH environment variable. The library environment variable can be changed in several places in the operating system (/etc/ld.so.conf, bash.bashrc, .profile, putenv via PHP), but to be as unobtrusive as possible I chose to modify the startup script of the LAMPP service. Opening the file /opt/lampp/lampp, specifically at line 100, you find the instruction:

export LD_LIBRARY_PATH=/opt/lampp/lib:$LD_LIBRARY_PATH

Changing the previous instruction to:

export LD_LIBRARY_PATH=/usr/local/lib

solves all the dependency problems encountered while running the example command. After restarting the service with /opt/lampp/lampp restart, the correct output is produced.