Normalization is a formal technique for analyzing relations based on their primary keys and functional dependencies (Codd, 1972b). Its ultimate aim is to minimize redundant data so that the data, its relationships, and its constraints are represented accurately. To normalize a database, a series of tests is performed on each relation to determine whether it satisfies or violates the requirements of a given normal form. In doing so, the database designer usually ends up with a set of tables, each concerned with a limited part of the data. Storing data in normalized tables reduces the total amount of redundant data and eliminates update anomalies, but when application programmers build user front ends they are obliged to use multiple joins, subqueries, or views as data sources, and that can slow down the application.
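The trade-off described above can be sketched with a small, hypothetical schema (the table and column names are illustrative only, not taken from the text): customer attributes are stored once in their own table, an update touches a single row instead of many, and the front end reassembles the wide row with a join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: customer attributes live in one place, and
# Orders references them via a foreign key. In the unnormalized
# alternative, name and city would be repeated on every order row.
cur.executescript("""
CREATE TABLE Customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    city        TEXT NOT NULL
);
CREATE TABLE Orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES Customer(customer_id),
    product     TEXT NOT NULL
);
""")
cur.execute("INSERT INTO Customer VALUES (1, 'Ali', 'Muscat')")
cur.executemany("INSERT INTO Orders VALUES (?, 1, ?)",
                [(10, 'Laptop'), (11, 'Mouse')])

# Changing the customer's city is now a single-row change:
# no update anomaly, no risk of inconsistent copies.
cur.execute("UPDATE Customer SET city = 'Salalah' WHERE customer_id = 1")

# The front end pays for this with a join to rebuild the wide row.
rows = cur.execute("""
    SELECT o.order_id, c.name, c.city, o.product
    FROM Orders o JOIN Customer c ON c.customer_id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
print(rows)
```

Both order rows now report the updated city, because it is stored exactly once.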
With a large database and many concurrent users, reading or writing data drawn from many joined tables will of course slow down the application, but this should not deter a database designer from normalizing the database. Instead, the designer should use his or her understanding of the requirements to anticipate performance problems and suggest and implement appropriate solutions. For example, if it is known that certain data will be needed by many users running the same SQL, a stored procedure can encapsulate that query; temporary tables or views can also be used, and a data warehouse or other solutions can be suggested.
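One of the mitigations mentioned above can be illustrated with a view (a minimal sketch; SQLite is used here because it is self-contained, though it has no stored procedures, and the schema and view name are invented for the example). The view captures the common join once, so every front end queries it by name instead of repeating the join logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Orders (order_id INTEGER PRIMARY KEY,
                     customer_id INTEGER REFERENCES Customer(customer_id),
                     total REAL);
INSERT INTO Customer VALUES (1, 'Ali'), (2, 'Badr');
INSERT INTO Orders VALUES (10, 1, 99.5), (11, 2, 20.0), (12, 1, 5.0);

-- The join and aggregation are defined once, centrally; applications
-- treat customer_totals as if it were an ordinary table.
CREATE VIEW customer_totals AS
    SELECT c.name, SUM(o.total) AS total_spent
    FROM Customer c JOIN Orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id;
""")
rows = cur.execute("SELECT * FROM customer_totals ORDER BY name").fetchall()
print(rows)
```

In a server DBMS the same idea extends to stored procedures or materialized results, which additionally let the optimizer or administrator cache the query plan or the data itself.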
Normalization is grounded in proven mathematical set theory, and in most cases a normalized database improves the performance of the database server and reduces storage space. If a performance problem is identified, the database designer should suggest alternative solutions, but not at the cost of normalization, as its advantages far outweigh the performance penalties.