Scope

The design of safe and durable concrete structures made from standard as well as high performance concretes, and the maintenance and rehabilitation of the built infrastructure, both require an in-depth understanding of concrete as a complex, heterogeneous, aging composite material.

Questions pertaining to ultimate capacity, serviceability, and durability are time-variant problems that have to be faced in all phases of a concrete structure's life-time, from design through operation and maintenance up to demolition. In the design phase, multi-decade predictions of phenomena are required that by far exceed the time span of all available observations. The application of new materials requires the formulation of design codes without the decade- or even century-long experience that the existing codes are based on. During operation, the condition and performance of existing aging and deteriorating structures need to be assessed and extrapolated based on incomplete information.

The challenges ahead, that is, multi-decade extrapolations in time based on short-term observations, the quantification of the small failure probabilities relevant for structural safety based on limited measurements, and predictions for unseen sizes and conditions based on limited practical experience, can only be met by an integrative approach bringing together theory, numerical simulation, and experiment. Only formulations and models that are derived from sound theoretical concepts and that are calibrated and validated by thorough (laboratory) tests can be expected to be sufficiently accurate and predictive to address the future demands in concrete engineering. The following chapters provide a brief summary of topics and challenges associated with the analysis and design of aging concrete structures.

As detailed in the introduction, the driving force behind research on concrete as an aging material stems from questions arising on the structural scale: assessing a structure's current condition and predicting its future condition. This requires suitable numerical simulation techniques that capture the structural response at any point in time and that have to be calibrated and validated using measurement (monitoring) data. The identification of unknown material properties and boundary conditions is performed as an inverse analysis, which represents a typically nonlinear optimization problem. The introduction of stochastic models attempting to capture the uncertainty in the model and its inputs, on both the action and the resistance side, in combination with suitable reliability engineering tools, finally allows a performance assessment in terms of failure probabilities. Time-dependent phenomena and deterioration processes are captured by prediction models, allowing a service-life assessment. In order to keep the modeling uncertainties small, the material model for concrete has to be calibrated and, finally, validated based on a full fracture-mechanical characterization by destructive testing of small-scale samples in the laboratory. Several articles covering elements of the above listed research fields have been published. Yet many simplifications still have to be made due to the lack of models, and even of a basic understanding of the involved phenomena, which in many cases cannot be gained on the structural scale and requires refined analyses on lower scales. A major shortcoming of previous performance assessments is the lack of models for concrete aging coupled with shrinkage and creep effects, and for their interaction with other deterioration mechanisms; these topics are currently being addressed by ongoing research.

Introduction

The societal relevance of the construction industry can be highlighted by the fact that almost half of the total national wealth is tied up in long-lasting building investments. In recent years, the steadily increasing average age of our infrastructure, together with budgetary constraints and an increasing awareness of climate change and sustainability considerations, has led to the development and promotion of life cycle cost (LCC) concepts. Up to now, maintenance aspects have not, or have hardly, entered the decision process regarding the construction of new buildings or structures. Furthermore, even maintenance strategies were mostly developed based on the allocation of available funds for short-term financial periods, for example, 6-year budgets. As a consequence, in many cases the initially cheaper design has been chosen over a more sustainable, durable, and ultimately cost-efficient solution, completely ignoring the significantly higher maintenance costs that accumulate over the life-time. Today, it is clear that in addition to the traditional design criteria (ultimate load capacity, serviceability, initial costs, and aesthetics), durability and sustainability aspects as well as maintenance costs, up to the costs for demolition and recycling, have to enter the decision process associated with the construction of new structures. Recently, new design concepts taking into account the aspects of life cycle design were proposed (Mark et al., 2013).

Although the concept of LCC is sound and well formulated from a financial point of view, many uncertainties remain according to Frangopol (2011). For one, the development of interest rates and costs over half a century, as required for this type of calculation, is highly uncertain. Of even higher significance for the quantification, and ultimately the optimization, of LCCs is the uncertainty associated with the prognosis of a structure's future condition, with and without intervention (Frangopol and Estes, 1999). Considering the required extrapolation over several decades, quantifying the short-term and long-term effects of the various available prevention, protection, and rehabilitation strategies on a structure's condition poses a significant challenge. The current practice in bridge engineering, and increasingly also in other fields of structural engineering, is based on qualitative condition ratings that characterize a structure's adequacy to fulfill various requirements, usually based on visual inspections. These requirements range from pure aesthetics through serviceability-related issues such as crack width and deflection to bearing capacity and safety. Many of the defined criteria serve as indicators for phenomena that are not directly accessible in the course of a visual inspection but are of significance for a structure's performance. All of these mostly code-based and typically structure-specific requirements can be formulated by means of limit state equations, either in terms of structural response quantities or equivalent life-times (Ang and Tang, 2007).

It is important to note that the current design standards follow a prescriptive design approach, which is generally deemed to be conservative, is based on long-term experience, and does not require accurate prediction models for time-dependent processes. For example, the durability of structures subject to standardized exposure classes is deemed to be guaranteed for a reference life-span if a set of prescriptions is satisfied, for example, concerning the water/cement ratio and the type of cement required to achieve a concrete strength class and sufficiently low permeability, the minimum concrete cover to the reinforcement, and a maximum allowable crack width (Matthews, 2007). This approach is limited to well-established materials and design situations, whereas a performance-based approach is generally applicable. However, such an approach requires the direct definition of a function that a structural component has to fulfill for a given life-time. Then, all the required material properties and geometrical features follow automatically, provided that a suitable framework and sufficiently predictive models for the structural response and the time-dependent phenomena are available.

Considering the situation sketched above, it becomes clear that sound models for structural analysis as well as reliable prediction models for time-dependent processes are paramount for the construction of new structures and the maintenance of existing ones within the frameworks of life cycle cost analysis and sustainability. In order to accurately identify, assess, and predict a structural system's performance and full safety potential, all influences that a real structure is likely to face have to be captured accurately. For concrete structures (mildly reinforced as well as prestressed, depending on the type of structure), these are, among others, aging, cracking, shrinkage, creep, steel relaxation, carbonation, corrosion, freeze and thaw, high temperature, temperature cycles, and fatigue. While these mechanisms have been studied individually, work regarding their interaction, coupled evolution in time, and associated statistics is much scarcer or missing.

Experimental investigations provide the required insight into the above mechanisms and ultimately allow both the calibration and the validation of simulation techniques, constitutive formulations, and degradation models (Wendner et al., 2015b). However, only models built on sound theoretical foundations provide the predictive capabilities that are paramount for multi-decade predictions. The danger of systematically wrong prognoses associated with purely empirical models may be significant and cannot be quantified. Considering further the rapid progress in concrete technology, an integrative approach is required, comprising theoretical considerations and carefully chosen experiments that are supplemented by simulations. These virtual tests, calibrated and validated by experiments, allow an in-depth study of the relevant mechanisms, parameter sensitivities, and model uncertainties.

In general, identifying the current performance and predicting the future performance of a structural element, and especially of a full structure, is a complex task. As portrayed in Figure 1, typically unknown or uncertain boundary conditions (Strauss et al., 2012; Wendner and Strauss, 2015), material properties, or geometrical characteristics need to be identified based on the observed current behavior. Once the past and current response can be described, a structure's performance can be assessed and projected into the future (Strauss et al., 2009; Strauss et al., 2013).

Figure 1

Performance assessment and prediction

The challenges in the latter stem from the large uncertainties associated with predicting changes in the structural response and deterioration processes, and even with anticipating the development of mechanical and environmental loads for as long as a century, leading to a correspondingly large scatter in the expected life-times. Figure 2 illustrates the large uncertainty in life-time predictions resulting from a small variation in the model response. Furthermore, the safety margin as the distance between the required/admissible value and the threshold, both in terms of the structural response quantity (here deflection) and the life-time, is presented. Structural monitoring data (Strauss et al., 2011) in combination with suitable updating techniques such as Bayesian updating (Bažant et al., 1984) improve the quality of predictions based on time-discrete as well as continuous current and future information.

Figure 2

Limit state definition and prediction uncertainty

Aging and deterioration of concrete structures are relevant aspects both for the design of new structures and for the assessment of existing ones. During the design of new structures, especially statically indeterminate frames, the stiffness evolution and, thus, the potential evolution of constraint forces and moments is a question of interest. For jointless bridges, additionally, the design for temperature loads, creep, and shrinkage can become the governing load combination aside from traffic loads. In order to control the development of constraint loads and also improve a structure's durability, innovative design concepts have been developed in Austria. Investigated aspects comprised temperature effects, creep and shrinkage, the development of active and passive earth pressure, as well as the functionality of inclined approach slab solutions without expansion joints (Strauss et al., 2011; 2012; Wendner and Strauss, 2015).

Historically, resonance due to ambient excitation is, contrary to steel construction, not a design situation that is typically considered for concrete structures, with the exception of high-speed train connections, super-long bridges, or record-setting skyscrapers. However, the development of high-performance concretes now extends the range of possible applications to types of structures that were previously unthinkable. Long and slender foot bridges made from fiber-reinforced ultra-high performance concrete, for example, face similar design problems as the ones that have to be solved for steel girders, such as resonance or web buckling, to name a few. With this development, tuned mass dampers and tuned liquid column dampers also gain relevance (Wendner et al., 2007; Reiterer et al., 2008).

Lightweight, normal-strength, and higher strength concretes see a gradual evolution of their properties, resulting, for example, from changes in the cement composition and the availability of new admixtures and additives. While these only slightly change the mechanical properties, for example, after 28 days, they have large effects on the development of the material properties and the creep and shrinkage behavior (Hubler et al., 2015), and, through differences in the developing micro-structure, also on the permeability and, thus, the durability. Recently, the so-called gradient concrete, characterized by a gradual change of density and strength throughout the cross-section, has also been gaining attention. With the rapid progress in 3D printing, such a material may soon become reality (Herrmann and Sobek, 2016; Strieder et al., 2018), improving durability, serviceability, and sustainability alike.

Major potential for more durable and sustainable concrete structures can be expected following a successful conclusion of the research on self-healing concrete. There are two main streams of activities, focusing on (i) the ability of concrete to close cracks by itself in a moist environment without any special arrangement in the material design, and (ii) the introduction of special additives that accelerate the clogging of cracks (Edvardsen, 1999; van Tittelboom et al., 2013; di Luzio et al., 2018).

At the moment, there is a lack of systematic research regarding the aging behavior of modern concretes as well as the influence of admixtures on time-dependent processes. Furthermore, the characterization of ultra-high performance concretes (Wan et al., 2016) with substantially different material response poses an additional challenge that will become crucial for safely exploiting the advantages of the new materials. In general, the challenges associated with performance assessment and prediction require a more profound and systematic investigation of material properties and their changes over time (Wendner et al., 2015).

The design of new structures, considering the increasing number of imposed constraints stemming from sustainability, life cycle costs, but also performance and functionality, will remain a challenge. However, an even bigger challenge, and the main task for the 21st century, concerns the maintenance and rehabilitation of the built infrastructure. As sketched in Figure 1, the necessary steps include (1) the identification of unknown material properties and boundary conditions (Strauss et al., 2012; Wendner and Strauss, 2015), (2) the reconstruction of the load history, (3) the formulation of stochastic models for the action and resistance side, (4) the performance assessment (Strauss et al., 2009), and (5) the performance prediction, considering the ongoing deterioration (Strauss et al., 2013) and, potentially, future maintenance interventions (Bergmeister and Wendner, 2010). Both an accurate assessment of a structure's current condition and a reliable prediction of its future condition are key elements for optimizing maintenance strategies, developing rehabilitation concepts, and evaluating changes in use.

Modeling of Material and Structural Response

The life-time performance of structural systems is determined by the performance of every single structural element and their interaction. Depending on the type of system and the involved materials, different mechanisms and deterioration processes are of relevance. Of all widely used construction materials, concrete is the only one that shows significant effects of aging, caused by the slowly developing micro-structure through the deposition of calcium silicate hydrates (CSH). Concrete by itself is a complex composite material consisting of cement paste and aggregate. The addition of reinforcement bars, prestressing steels and, more recently, fibers made from various materials further complicates the situation.

Concrete is a multi-scale material with distinctly different features and material characteristics depending on the scale of the problem (see Figure 3). On the scale of structures or structural members, concrete can be assumed to be homogeneous and can thus be approximated as a continuum. On lower scales, the material's characteristic length becomes comparable to the scale of the problem and its heterogeneity can no longer be ignored. Below this mesoscale, the physical and chemical properties of the micro-structure of the interfaces, CSH phases, and pores dominate the behavior.

Any structural problem can be solved numerically using either an implicit or an explicit framework. Whereas the former's advantage lies in its potential speed, convergence is not ensured, especially in the presence of distributed damage. The latter overcomes this problem through a stepwise solution at the cost of significantly increased computation time and is the framework of choice for dynamic problems. A structure's material is most commonly assumed to follow the theories of continuum mechanics and, thus, can be discretized by standard finite elements (FEM), where different element types can be differentiated depending on the element geometry, the form of the shape functions, and the number of integration points. A refinement of the standard finite element method are the extended finite element methods (XFEM), as discussed, for example, in Moës et al. (1999), with the advantage of capturing propagating cracks without the need for re-meshing. An alternative to discretization methods based on continuum mechanics are discrete models, in which the inherent heterogeneity of the material is represented directly, for example, through particles that represent the coarse aggregates (see e.g. Cusatis et al. (2011a)).

Answering problems associated with the long-term performance and especially the durability and sustainability of concrete structures requires the consideration of typically several scales in some form of multi-scale framework. The application of multi-scaling concepts, either implicitly (e.g., a local model for diffusion and corrosion that is mapped to a function of steel cross-section reduction for structural applications, see e.g. Strauss et al. (2013)) or explicitly through multi-scaling (Vorel et al., 2012), becomes unavoidable. Among the different multi-scale techniques, homogenization is a well-known method, widely used over the past decades. Eshelby (1958) and Hashin and Shtrikman (1963) were among the first to develop analytical homogenization techniques for the analysis of composite materials. Non-linear problems characterized by plasticity and strain hardening have been solved in the past decade through so-called computational homogenization (Miehe et al., 1999), in which the Gauss point response is obtained by applying appropriate boundary conditions to a representative volume element (RVE). Another type of homogenization technique, called Asymptotic Expansion Homogenization (AEH), uses the asymptotic expansion of a displacement field to build the homogenization framework (Fish and Wagiman, 1992). Unfortunately, the classical homogenization techniques can only take into account the shape and volume fraction of the heterogeneity, but cannot capture the effect of its absolute size. Generalized continuum approaches such as Cosserat continua can overcome these difficulties (Feyel, 2003).
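
To make the scope of classical analytical homogenization tangible, the following minimal sketch evaluates the Hashin-Shtrikman bounds on the effective bulk modulus of a two-phase composite; the phase moduli and volume fractions are illustrative assumptions, not values from this article.

```python
# Sketch: Hashin-Shtrikman bounds on the effective bulk modulus of a
# two-phase composite (e.g., cement paste matrix plus aggregate inclusions).
# All moduli in GPa; the values below are assumed for illustration.

def hs_bulk_bounds(K1, G1, f1, K2, G2, f2):
    """Return (lower, upper) Hashin-Shtrikman bounds on the bulk modulus.
    Phase 1 is the softer phase, phase 2 the stiffer one; f1 + f2 = 1."""
    lower = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    upper = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))
    return lower, upper

# Assumed example: cement paste (K=14, G=9) with 70% aggregate (K=44, G=22)
lo, hi = hs_bulk_bounds(14.0, 9.0, 0.3, 44.0, 22.0, 0.7)
print(f"K_eff bounds: {lo:.1f} .. {hi:.1f} GPa")
```

Note that, as stated above, such bounds reflect only the phase properties and volume fractions; the absolute size of the heterogeneity does not enter.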

Figure 3

Length-scales of concrete as a multi-scale material according to Cusatis et al. (2014)

Constitutive Formulations

In recent years, rapid progress in concrete technology has led to the development of many new construction materials with novel properties. These are, among others, ultra-high performance concretes (UHPC) with strengths of up to 200 MPa, self-consolidating concretes (SCC) with improved rheology, fiber-reinforced concretes (FRC) characterized by significantly increased ductility, and engineered cementitious composites with superior impact resistance.

The main characteristics of the tensile behavior of concrete and other quasi-brittle materials are cracking and strain softening (Wendner et al., 2015), that is, a loss of load carrying capacity with increasing deformation. Such behavior is typically described by non-linear fracture mechanics and suitable strain softening laws, characterized by the total fracture energy GF or, equivalently, by Hillerborg's characteristic length (Hillerborg et al., 1976), lch = E·GF/ft² (E = Young's modulus; ft = tensile strength), which was derived based on Irwin's approximation for the size of the plastic zone in ductile materials (Irwin, 1958). The most important consequence of strain softening is the dependence of structural strength on structural size (Bažant and Planas, 1998); any mathematical model for concrete must therefore be able to reproduce this size effect. Concrete behavior in compression is even more complicated: under low or no confinement, the compressive behavior features brittleness and strain softening; with increasing confinement, however, the behavior transitions from strain softening to strain hardening and is characterized by significant ductility. Under sufficient lateral confinement, concrete can reach strains over 100% without loss of load carrying capacity or visible damage (Hilsdorf et al., 1973).
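
As a numerical illustration of these quantities, the sketch below evaluates Hillerborg's characteristic length for typical normal-strength concrete values and the nominal strength predicted by Bažant's size effect law; all parameter values (E, GF, ft, B, D0) are assumptions chosen for illustration.

```python
# Sketch: Hillerborg's characteristic length and Bazant's size effect law
# for nominal strength; material values are typical assumptions, not data
# from the article.
import numpy as np

E, GF, ft = 30e3, 0.07, 3.0        # MPa, N/mm, MPa
l_ch = E * GF / ft**2              # characteristic length [mm]
print(f"l_ch = {l_ch:.0f} mm")

# Size effect law: sigma_N = B*ft / sqrt(1 + D/D0), with B and D0 obtained
# by fitting geometrically scaled tests (assumed values here).
B, D0 = 1.5, 100.0                 # [-], [mm]
for D in (50, 100, 200, 400, 800): # specimen sizes [mm]
    sigma_N = B * ft / np.sqrt(1.0 + D / D0)
    print(f"D = {D:4d} mm -> sigma_N = {sigma_N:.2f} MPa")
```

The printed nominal strengths decrease with size, which is exactly the trend a valid concrete model must reproduce.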

Over the years, many constitutive models have been developed to describe the behavior of concrete. They utilize the concepts of plasticity, damage mechanics, and fracture mechanics, and they are typically formulated in tensorial form using the classical framework of continuum mechanics. Continuum constitutive equations can also be formulated in vectorial form through the microplane theory (Taylor, 1938), which has a number of advantages over tensorial formulations. Microplane models do not need to be formulated as functions of the macroscopic stress and strain tensor invariants (Bažant and Oh, 1985); the principle of frame indifference is nevertheless satisfied because the microplanes sample, without bias, all possible orientations in three-dimensional space. The constitutive laws specified on the microplanes are activated by employing either the kinematic or the static constraint. The former defines the microplane strains as projections of the macroscopic strain tensor, whereas the latter defines the microplane stresses as projections of the macroscopic stress tensor. Kinematically constrained formulations can be used with microplane constitutive laws exhibiting softening, and for this reason they have been adopted for quasi-brittle materials such as concrete (Bažant and Ožbolt, 1992), even at early age (Di Luzio and Cusatis, 2013).

For continuum formulations, objectivity of the solution and independence of the numerical results from the finite element discretization have to be either inherent to the constitutive model, as in the case of higher-order (De Borst et al., 2004) and nonlocal (Jirásek, 1998) models, or must be imposed using regularization techniques such as the crack band approach (Bažant and Oh, 1983). Methods accounting for strain softening through the insertion of cohesive discrete cracks (Hillerborg et al., 1976) also do not suffer from mesh sensitivity.
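
The essence of the crack band regularization can be sketched in a few lines: the post-peak law is rescaled with the element size h so that the energy dissipated per unit crack area remains GF. Linear softening and the material values below are assumptions for illustration.

```python
# Sketch of the crack band idea: scale the post-peak softening law with the
# element size h so the dissipated energy per unit crack area stays GF.
E, GF, ft = 30e3, 0.07, 3.0            # MPa, N/mm, MPa (assumed)

def softening_strain(h):
    """Strain at full stress release for an element of size h [mm]."""
    eps_f = 2.0 * GF / (ft * h)        # triangle area times h equals GF
    # Elements coarser than ~2*l_ch would need a snap-back-type law:
    assert eps_f > ft / E, "element too large for plain crack band scaling"
    return eps_f

for h in (10.0, 50.0, 200.0):
    print(f"h = {h:5.0f} mm -> eps_f = {softening_strain(h):.2e}")
```

The softening branch thus becomes steeper for larger elements, keeping the computed fracture energy mesh-independent.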

Another class of models often used to simulate quasi-brittle materials is based on lattice or particle formulations, in which the material is discretized 'a priori' according to an idealization of its internal structure. The particle size and the size of the contact area between particles, for particle models, as well as the lattice spacing and cross-sectional area, for lattice models, equip these formulations with inherent characteristic lengths and with the intrinsic ability to simulate the geometrical features of the material's internal structure. This allows the accurate simulation of damage initiation and crack propagation at various length scales, at the price, however, of increased computational cost.

Earlier attempts to formulate particle and lattice models for fracture are reported, for example, in Bažant et al. (1990). A comprehensive discrete formulation for concrete was finally proposed by Cusatis and coworkers (Cusatis et al., 2011a; 2011b), who formulated the so-called Lattice Discrete Particle Model (LDPM). LDPM was calibrated and validated against a large variety of quasi-static and dynamic loading conditions and was demonstrated to possess superior predictive capability. All of the above referenced constitutive formulations have advantages and disadvantages concerning their applicability, the number of input parameters, the ease of use, and, finally, the computational cost. However, if chosen properly, they can, by themselves or coupled in a suitable framework, help to bridge the scales and solve many of the problems associated with predicting the short-term response of structural systems. Depending on the solver, dynamic analyses can also be performed, capturing the non-linear material response due to cracking and thus predicting changes in eigenfrequencies due to material damage. However, the coupling, within the same framework, of the (non-linear) material models with physical models describing, for example, diffusion, or with chemical models describing hydration, the increase in volume due to corrosion, or the alkali-silica reaction, is still an open research field. First successful attempts have been made (see e.g. Alnaggar et al., 2013 and Di Luzio and Cusatis, 2013). These and further scenarios relevant to the life-time performance of concrete structures will be discussed in a later section.

Aging

Concrete is a complex, heterogeneous, aging composite material. The main characteristics of concrete, and the source of many problems associated with describing its time-dependent response, stem from the fact that the size of the heterogeneities is not negligible compared to the size of the structural element, and that the chemical reactions involved in its formation are complex and comparably slow. The former is one of the sources of the frequently discussed energetic size effect as introduced by Bažant and Novák (2000). The latter causes the macroscopic phenomena of creep and shrinkage as well as the slow evolution of the mechanical properties.

It is generally impossible to define thermodynamic potentials when the material properties are considered functions of time. This complicates the formulation of, for example, thermodynamically admissible visco-elastic creep functions. However, the problem can be overcome, as formulated by the solidification theory (Bažant et al., 1997), if the properties of a constituent, the hydrated cement gel (calcium silicate hydrates, CSH), are constant while the aging on the macro-scale results from an increase of the mass fraction (or concentration) of this constituent, as new hydration products are gradually attached to the pore surfaces and thus stiffen the material (Wendner et al., 2015b).
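
A minimal sketch of this idea, in the spirit of solidification theory (with Φ denoting the assumed non-aging micro-compliance of the CSH gel and v(t) the growing volume fraction of solidified material), reads:

\[
\gamma(t) = \int_0^t \Phi(t - t')\,\mathrm{d}\sigma(t'), \qquad \dot{\varepsilon}_v(t) = \frac{\dot{\gamma}(t)}{v(t)},
\]

so that aging enters only through the growth of v(t), while the constituent compliance Φ itself remains time-invariant and thermodynamically admissible.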

It is well known that the main constituents of concrete are water, cement, and aggregates following a grading curve. Additionally, admixtures and reactive additives are used to modify the rheological properties. In recent years, cement replacement products such as slag and fly ash have found their way into concrete mix designs in order to (a) decrease the required amount of cement, and thus reduce the environmental impact, and (b) improve the mechanical properties.

Cement hydration is characterized by the reaction of free water with unhydrated cement particles. According to the thermodynamics-based model proposed by Ulm and Coussy (1995) and later revised by Cervera and Oliver (1999), the hydration kinetics can be described by postulating the existence of a Gibbs free energy dependent on the external temperature T and the hydration extent χc. Knowing that the amount of reactants is finite, one can postulate an asymptotic reaction degree. The rate of hydration is governed by an Arrhenius-type equation with a hydration activation energy, while the viscosity governing the diffusion of water through the layer of cement hydrates is an exponential function of the hydration extent. The same framework can be adopted for the silica fume reaction and other constituents of relevance, since the kinetics of the pozzolanic reaction can also be assumed to be a diffusion-controlled process.

It is important to realize that all ongoing chemical reactions are coupled with each other through the concentration of reactants, the availability of water, and the energy balance. Consequently, moisture transport and heat transfer need to be solved as further coupled problems in a multi-physics framework. The overall moisture transport, including water in its various phases (capillary water, water vapor, adsorbed water, and non-evaporable, chemically bound water), can be described through Fick's law, which expresses the flux of water mass per unit time J as a function of the spatial gradient of the relative humidity h (Di Luzio and Cusatis, 2009), with age-dependent sorption/desorption isotherms. The heat transfer in concrete is governed by Fourier's law in combination with the enthalpy balance equation.
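
A minimal numerical sketch of such Arrhenius-type kinetics is given below: the hydration degree α is integrated with forward Euler using a Cervera-type normalized affinity. All parameter values are assumptions chosen for illustration, not calibrated values from the cited models.

```python
# Sketch: Arrhenius-type hydration kinetics integrated with forward Euler.
# d(alpha)/dt = A(alpha) * exp(-Ea / (R*T)), with a Cervera-type normalized
# affinity A(alpha). All parameters below are illustrative assumptions.
import math

Ea_over_R = 5000.0                 # activation energy over gas constant [K]
alpha_inf = 0.85                   # asymptotic degree (finite reactants)
A1, A2, eta = 1.0e7, 5.0e-3, 7.5   # assumed affinity parameters

def affinity(alpha):
    return A1 * (A2 + alpha) * (alpha_inf - alpha) \
           * math.exp(-eta * alpha / alpha_inf)

def hydrate(T_kelvin, hours, dt=0.01, alpha0=1e-4):
    alpha, t = alpha0, 0.0
    while t < hours:
        alpha += dt * affinity(alpha) * math.exp(-Ea_over_R / T_kelvin)
        t += dt
    return min(alpha, alpha_inf)

# Higher curing temperature accelerates hydration (same asymptote):
for T in (283.15, 293.15, 303.15):
    print(f"T = {T - 273.15:.0f} C -> alpha(72 h) = {hydrate(T, 72.0):.2f}")
```

The temperature sensitivity printed by the loop is precisely the coupling that makes heat transfer and hydration inseparable in a multi-physics setting.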

The local degree of maturity can then be mapped to an evolution of mechanical properties, as suggested by Di Luzio and Cusatis (2013) and Wan et al. (2016). The current engineering practice considers the stiffness evolution only for specific problems and only in rare cases accounts for the aging of strength parameters. Both the American Concrete Institute and fib provide aging functions that allow a modification of the standard 28-day properties. While recommendations for the aging of Young's modulus and compressive strength exist (ACI Committee 209, 2008; fib, 2013), no such provision is given for the fracture energy, and only a crude one for the tensile strength. The evolution of triaxial properties with time is still an open research topic, although it is relevant for many applications such as pre-stressing anchor heads.

Uncertainties and Material Randomness

In general, even a sophisticated model embedded within the most advanced simulation framework will not be able to exactly capture the real behavior of any engineering structure, owing to aleatory and epistemic uncertainties and limited knowledge. However, a properly calibrated model should be able to provide a good estimate. The deviation between observation and simulation that has to be expected depends on the scatter in the applied load, the level of model refinement, and the amount of uncertainty in the mechanical properties of the used materials (due to their inherent heterogeneity). Further uncertainties arise from geometrical tolerances and variations in the construction procedure (Bergmeister, 1985). If the time dimension is added to the problem (as required for any life-time prediction), the uncertainty associated with the predicted mean response increases significantly with the time span of extrapolation, in particular if degradation processes and extreme events are to be considered.

Apart from the loads (which might change over time, e.g., due to changes in usage) and environmental impacts causing, for example, corrosion, important processes influencing the long-term performance of concrete structures are the time-dependent processes of creep, shrinkage, and steel relaxation. Furthermore, deterioration processes associated with the corrosion of reinforcement, for example, due to carbonation or chloride ingress, as well as fatigue phenomena in concrete and steel, influence the life cycle performance. Meaningful life cycle management of infrastructure is only possible through the integration of stochastic degradation prognosis approaches that include appropriate monitoring concepts, as proposed by Budelmann et al. (2008) for steel corrosion. Ultimately, accurate prognosis models will allow for an optimized design of new systems as well as an accurate assessment of existing ones, and thus facilitate efficient maintenance management over the full life-time. In order to meet and ensure the safety requirements set by society, a deepened understanding is required, not only of mechanisms on a deterministic level, but especially of the stochastic processes. These safety requirements are defined in EN 1990 (Eurocodes, 2002) with a failure probability of 10⁻⁶ per year for the ultimate limit state and 10⁻³ per year for the serviceability limit state of standard structures. For very important structures, the required failure probability may be tightened to 10⁻⁷ per year.

In reliability engineering, Gaussian distributions are frequently assumed for convenience. This assumption holds, for example, for geometrical properties or for quantities that, following the central limit theorem, converge towards a Gaussian distribution. The second most popular choice is the log-normal distribution, which still has favorable characteristics, among them non-negativity. While for loads the distribution type is typically clear and in many cases given by members of the family of extreme value distributions, the situation for resistance quantities is still a topic open to discussion. In 1997, the Joint Committee on Structural Safety issued the first probabilistic model code with the intention to provide guidance for fully probabilistic analyses and reliability engineering (Vrouwenvelder, 1997). In this document, a log-normal distribution is suggested for modeling the strength of concrete. However, Bažant and co-workers (Le et al., 2011; Salviato and Bažant, 2014) argue, based on multi-scaling and theoretical considerations, that strength should be modeled by a Gaussian distribution with a Weibullian tail, where the grafting point depends on size. This suggestion has been adopted by many researchers, especially in the stochastic mechanics community.

Recorded extremes (i.e., the maximum and minimum values) of physical quantities are of special interest for performance assessment and prediction (Ang and Tang, 2007). The term "extreme value" generally refers to the largest or smallest value (whichever case governs) observed within a sample of size n. This sample represents part of a basic population with a given distribution type. The exact distribution of the largest or smallest value converges with increasing sample size n→∞ to a so-called asymptotic distribution. Gumbel (1959) investigated this phenomenon and proposed three types of such asymptotic distributions depending on the tail behavior of the initial PDFs: the double exponential (type I, Gumbel), the single exponential (type II, Fréchet), and the exponential form with an upper (or lower) bound (type III, Weibull). These extreme value distributions are not only the basis for many load models (e.g., for temperature, wind, snow, traffic) as specified by design codes, but also play an important role in modeling resistance quantities, as exemplified by the Weibull theory of strength (Weibull, 1951).
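
This convergence is easy to demonstrate numerically. The sketch below uses a Gaussian parent, whose normalized maxima approach the Gumbel (type I) form; sample sizes and the replication count are arbitrary assumptions.

```python
# Sketch: convergence of sample maxima toward an asymptotic extreme value
# distribution. For an exponential-tailed parent (here standard Gaussian),
# the maxima approach the Gumbel (type I) distribution.
import numpy as np

rng = np.random.default_rng(42)
for n in (10, 100, 1000):
    maxima = rng.standard_normal((5000, n)).max(axis=1)
    print(f"n = {n:5d}: mean max = {maxima.mean():.2f}, "
          f"std = {maxima.std():.2f}")
# The mean grows roughly like sqrt(2 ln n) while the scatter shrinks,
# consistent with the Gumbel limit for Gaussian parents.
```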

Furthermore, extreme value distributions allow the incorporation of monitoring and measurement information into reliability analysis and performance assessment, either directly through the derivation of stochastic models, as proposed in Wendner and Strauss (2015), or through updating concepts such as Bayesian inference (Beck and Katafygiotis, 1998).

A further question of debate is the proper description of stochastic dependence. The significant effect of dependence, so far an underestimated issue, has recently been highlighted by Dutfoy and Lebrun (2009). Particularly in the area of low target failure probabilities, this effect becomes dominant as the number of variables increases. Nevertheless, standards and engineering societies provide no guidance. Typically, the joint probability distribution has to be approximated from incomplete information, given in the form of marginal distributions and pairwise linear correlation coefficients. The latter can be assembled into a correlation matrix, which, however, does not necessarily satisfy the condition of positive definiteness (Vořechovský, 2004). For concrete, correlation matrices have been suggested (Strauss et al., 2009) that approximate the statistical dependence between the main concrete material properties: compressive strength, tensile strength, Young's modulus, and fracture energy.

An additional topic of interest is the uncertainty in model predictions that stems from the material heterogeneity of concrete, which is reflected in a spatial variability of material properties. While local fluctuations of strength are of little significance for large structures and do not affect the overall structural response, this is not true for structural members whose dimensions are of the same order of magnitude as the size of the heterogeneity. With decreasing size of the structural member, the probability increases that a weak spot causes a "premature" failure before the location of maximum stress (e.g., in midspan) reaches the local strength. This causes the so-called energetic-statistical size effect (Bažant and Novák, 2000), which cannot be captured numerically unless the spatial variability is also modeled. Random fields provide the means to impose spatial variability in the framework of finite elements or discrete elements (Ostoja-Starzewski, 1998). So far, mostly Gaussian random fields with either no imposed cross-correlation or perfect cross-correlation have been employed. While the concept of random fields is consistent with the framework of continuum models, their application to discrete models without consideration of the modeled meso-structure is questionable. Furthermore, the correct way of determining the autocorrelation length for a given concrete composition is still an open question, as is the appropriate type of random field.
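
As an illustration of the concept, the following sketch generates a one-dimensional homogeneous Gaussian random field of tensile strength along a member via Cholesky factorization of the covariance matrix; the squared-exponential autocorrelation, the correlation length, and the field statistics are assumptions for illustration.

```python
# Sketch: 1D homogeneous Gaussian random field of tensile strength along a
# beam axis, generated by Cholesky factorization of the covariance matrix.
import numpy as np

x = np.linspace(0.0, 2000.0, 200)       # positions along the member [mm]
lc = 300.0                              # assumed autocorrelation length [mm]
mean_ft, cov_ft = 3.0, 0.15             # mean [MPa], coefficient of variation

dx = x[:, None] - x[None, :]
C = np.exp(-(dx / lc) ** 2)             # squared-exponential autocorrelation
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))   # jitter for stability

rng = np.random.default_rng(1)
field = mean_ft * (1.0 + cov_ft * (L @ rng.standard_normal(len(x))))
print(f"min ft = {field.min():.2f} MPa at x = {x[field.argmin()]:.0f} mm")
```

The reported weakest spot is exactly the kind of local minimum that triggers the "premature" failures behind the energetic-statistical size effect.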

Identification

Identification tools are used in many areas of civil and structural engineering and provide essential information for the analysis of existing and future systems and structures and, thus, also for their optimization. In general, identification tools are characterized by the type of input information they use (Petryna, 2004; Strauss et al., 2009) and by whether they can determine the quantity of interest directly or only indirectly through model updating. The most general classification differentiates between global and local methods. While local identification tools can provide detailed information concerning the immediate vicinity of the observed quantity, they cannot be used to determine overall structural quantities such as the boundary conditions. Typical global identification tools use modal characteristics such as eigenfrequencies, eigenmodes, or damping parameters (Mazurek and De Wolf, 1990; Wendner, 2009). Alternatively, quasi-static characteristics such as influence lines can also be used (see e.g. Hoffmann, 2008).

In a mathematical sense, any identification problem can be formulated as an optimization problem in which the objective function (the error between the observed response and the model prediction) has to be minimized, subject to equality and inequality constraints. In many cases, structural identification problems result in (highly) nonlinear, frequently ill-conditioned optimization problems, complicated by the existence of local minima. The inverse analysis of creep and shrinkage measurements with the goal of extracting an improved prediction model is discussed by Wendner et al. (2015a). The identification of damage, expressed as a local loss in effective bending stiffness, was investigated by Hoffmann et al. (2007; 2009) on the laboratory scale as well as in a field test. The identification of boundary conditions, temperature loads, and the effects of earth pressure was investigated on a jointless bridge (Strauss et al., 2012; Wendner and Strauss, 2015).
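
A minimal example of such an inverse problem, formulated as nonlinear least squares, is sketched below: the bending stiffness of a simply supported beam is recovered from noisy "measured" natural frequencies. The beam properties, noise level, and starting value are illustrative assumptions.

```python
# Sketch: identification as nonlinear least squares. The bending stiffness
# EI of a simply supported beam is recovered from noisy natural frequencies.
import numpy as np
from scipy.optimize import least_squares

L_beam, mu = 20.0, 5000.0          # span [m], mass per unit length [kg/m]

def frequencies(EI, modes=(1, 2, 3)):
    """Analytical natural frequencies [Hz] of a simply supported beam."""
    return np.array([(n * np.pi / L_beam) ** 2 * np.sqrt(EI / mu) / (2 * np.pi)
                     for n in modes])

EI_true = 8.0e9                    # [N m^2], the value to be recovered
rng = np.random.default_rng(7)
f_meas = frequencies(EI_true) * (1.0 + 0.01 * rng.standard_normal(3))

res = least_squares(lambda p: frequencies(p[0]) - f_meas, x0=[5.0e9])
print(f"identified EI = {res.x[0]:.3e} (true {EI_true:.1e})")
```

With real monitoring data the residual would rarely be this well-conditioned, which is precisely where the ill-posedness and local minima mentioned above enter.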

Optimization

In general, optimization problems are classified as linear or non-linear, and as constrained or unconstrained. Linear problems are characterized by a linear dependence of the objective function and the constraints on the optimization variables. Many practical problems can be linearized with sufficient accuracy. A comprehensive summary of optimization problems is given in Luenberger (1989). Contrary to linear programming, non-linear problems can only be solved iteratively. Furthermore, for many practically relevant problems, the existence of local minima complicates matters.

The standard form of an optimization problem is given by an objective function f(x) quantifying the optimization goal, subject to a set of i = 1,...,n inequality constraints gi(x) ≤ 0 and j = 1,...,m equality constraints hj(x) = 0. Optimization problems are typically defined as minimization problems. (Any maximization problem can be converted into the minimization of the negative objective function.)
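
In compact form, the standard form stated above reads:

\[
\min_{x} f(x) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1,\dots,n, \qquad h_j(x) = 0, \; j = 1,\dots,m.
\]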

Many practical problems lead to multi-objective optimization problems, in which a set of objective functions (in the simplest case two) has to be optimized concurrently, subject to the same equalities and inequalities. Among the infinitely many solutions that satisfy these constraints, there is a set of so-called Pareto-optimal solutions, which are neither inefficient (strictly inside the admissible domain) nor impossible (outside the domain). Without additional information, it is impossible to select the best solution among them. A classic example is structural optimization with the objectives of (a) minimizing material and (b) maximizing capacity. Multi-objective optimization problems can be converted into standard optimization problems by introducing weighting functions, frequently expressed in terms of costs.

The objective function that has to be minimized in the course of structural inverse analyses and identification problems typically lacks convexity, and local minima can exist. Also, it is not "smooth", and the minimization problem may be mathematically ill-posed. Nevertheless, finding an absolute minimum is made possible by trust region algorithms, genetic algorithms, simulated annealing, and particle swarm algorithms. Further approaches include, among others, the "response surface method" and artificial neural networks. The latter implicitly capture the relationship between the observed input and output quantities by optimizing a set of weights during a training phase and have been successfully applied to inverse identification problems, even in cases where no clear functional relationship between inputs and response could be formulated.

Testing and Monitoring

A crucial element of model development and also of model application is the availability of suitable input data characterizing the material or structural response. Without unbiased and reliable measurement data, model development, calibration, validation and, ultimately, application are impossible.

Generally, a distinction between the terms "testing" and "monitoring" is made. The former typically refers to time-discrete measurements, often in a laboratory environment, whereas the latter often describes continuous in-situ observations by more or less automated systems. Testing comprises the areas of "non-destructive testing" and "destructive testing". Monitoring systems, on the other hand, are characterized by the type of measurement that is taken, the intended function of the monitoring system, and the planned duration of the campaign (Bergmeister and Wendner, 2010). Tests can be classified based on a combination of the following criteria: destructiveness of the measurement, duration, sample size, dynamic versus quasi-static, real-time versus accelerated testing, and specimen size, ranging from the material level to full-scale structural tests.

Destructive Testing

As implied by the term, the investigated specimen or structural component is destroyed in the course of a test that aims at investigating the material or structural behavior up to failure, usually with special interest in the failure mechanisms and failure loads. In order to characterize concrete, typically confined or unconfined compression tests on cubes and cylinders yielding the respective compressive strengths are performed, complemented by measurements of the elastic parameters. The macroscopic material strength in the tensile domain (as a structural property) is further determined by direct or, more frequently, indirect tension tests (Wendner et al., 2015). The latter comprise 3-point-bending tests and splitting tests. Furthermore, for a full fracture-mechanical characterization, the initial fracture energy Gf or the total fracture energy GF is required. Both can be determined, for example, by the work-of-fracture method using notched 3-point-bending or wedge-splitting test data. More recently, the determination of fracture energy from the peak loads of geometrically scaled specimens of various sizes (a so-called size effect study) was proposed by Bažant and co-workers (Hoover et al., 2013). This removes the challenge of recording the post-peak response.
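
The work-of-fracture evaluation itself is straightforward, as the following sketch with a synthetic load-deflection record shows: GF is the work under the curve divided by the ligament area. Self-weight and support effects are neglected here, and all numbers are assumed for illustration.

```python
# Sketch: total fracture energy GF by the work-of-fracture method for a
# notched three-point-bending test: GF = W / A_lig. Data are synthetic.
import numpy as np

u = np.linspace(0.0, 1.2, 400)                     # deflection [mm]
P = 4000.0 * (u / 0.05) * np.exp(1.0 - u / 0.05)   # assumed softening shape [N]

b, d, a0 = 100.0, 100.0, 30.0                      # width, depth, notch [mm]
W = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(u)))  # trapezoid rule [N mm]
GF = W / (b * (d - a0))                            # ligament area b*(d - a0)
print(f"GF = {GF:.3f} N/mm")
```

Note that the integral must extend down to (near) zero load; a truncated tail is one of the classical sources of bias in measured GF values.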

The industry standard for quasi-static load tests are closed-loop servo-hydraulic or electro-mechanical load frames, depending on the main application. In addition to piston stroke and force, displacements and strains at crucial parts of the system are measured, yielding load-displacement diagrams. Destructive tests are paramount for the calibration and validation of simulation techniques; however, only in combination with suitable data-acquisition systems and instrumentation does the investigation of the actual failure mechanisms and the determination of unbiased material properties become possible. A compression test in which only piston stroke and force are recorded may be used to determine the material strength but cannot provide insight into the deformation behavior.

A particular challenge in testing quasi-brittle materials is associated with obtaining the post-peak softening response for different types of tests and, ideally, various sizes. This information is quintessential for calibrating the model parameters that control softening in tension, shear-tension, or compression under low confinement. The ability to control a specimen in the softening regime is a stability problem. It can be shown that under displacement control, the equilibrium path remains stable as long as no point with vertical slope is reached. This limit state is called "snap-down" and represents the transition to a "snap-back" instability, which is characterized by an equilibrium path with global energy release. In the softening regime, parts of the system (load frame as well as specimen) unload elastically, releasing energy. While the energy release within the specimen is beyond the control of the experimenter (and can also be observed in numerical simulations), the energy released by the unloading experimental setup and load frame can be controlled. The following statement is generally true: the higher the stiffness of the load frame, the smaller the risk of snap-back instabilities.

In general, a stable control in the post-peak regime is possible if and only if the control quantity is monotonically increasing. This is the case for piston stroke if the compliance of the load frame is smaller than the magnitude of the specimen's softening compliance. True crack mouth opening displacement (CMOD) control in fracture tests or circumferential expansion control in unconfined compression tests is stable. Crack-initiation specimens such as unnotched beams or splitting prisms can be controlled by average strain or, in general, by relative displacements between given points on the specimen that include the forming macro-crack. In the latter case, the appropriate gage length is determined by two contradictory requirements: the amount of elastically unloading material within the monitored distance has to be minimized, while the gage length must be sufficiently large to contain the forming crack with high likelihood (Wendner et al., 2015).
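
The piston-stroke condition can be formalized in one line; this is a sketch under the stated assumptions, with C_m denoting the load-frame compliance and k_s < 0 the specimen's post-peak tangent stiffness dF/du_s. With the total controlled displacement u_tot = C_m F + u_s(F), stability along the softening branch requires

\[
\frac{\mathrm{d}u_{\mathrm{tot}}}{\mathrm{d}F} = C_m + \frac{1}{k_s} < 0 \quad\Longleftrightarrow\quad C_m < \left|\frac{1}{k_s}\right|,
\]

that is, the frame compliance must stay below the magnitude of the specimen's softening compliance for u_tot to keep increasing monotonically while the load decreases.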

Non-destructive Testing (NDT)

Contrary to destructive tests, in non-destructive tests the specimen or structural component remains undamaged (Malhotra and Carino, 2003). These experimental techniques are suitable for in-situ tests on real structures, which must not be damaged. For laboratory applications, non-destructive tests can provide insights into the effects of ongoing (deterioration) processes such as corrosion or fatigue. In the latter case, typically accelerated tests are performed. Many different techniques for non-destructive testing exist, as summarized by Bergmeister and Santa (2004).

The acoustic emission (AE) testing method is based on the principle that fracture and other mechanical phenomena create acoustic waves (Wenig et al., 1992). The AE test method detects, locates, identifies, and displays flaw data for the stressed object. During impact-echo tests, waves are created externally and transferred to the specimen. Depending on the analysis technique, the travel time between the emitted and the reflected wave, phase differences, or even nonlinear effects are evaluated to detect and quantify damage (Sansalone and Streett, 1997).

Thermographic methods are used to detect the flow of heat on a tested specimen. This heat can be associated with other forms of energy, such as the dissipation energy emitted by the formation and propagation of a crack. Furthermore, pre-warming of specimens (e.g., placing them in water baths with elevated temperatures) can allow for the detection of thermal losses at discontinuities (cracks) during the loading process. The basic instrumentation for this form of NDT is a thermal infrared camera, which can detect internal voids, areas with delamination, and cracks in concrete structures.

In order to study the internal structure of specimens, X-ray and CT scans have been performed. Although useful, these tests are expensive and limited to small-scale investigations due to the required energy (Wegner, 1998). Digital Image Correlation (Peters and Ranson, 1982) and Electronic Speckle Pattern Interferometry have a significantly wider range of applicability; however, measurements are limited to the surface. Electronic Speckle Pattern Interferometry (Ettemeyer, 1988) uses laser technology and video cameras and is often used as a tool for the identification of the mechanical properties of materials. It is a non-contact and non-intrusive method. The main idea is that an optically rough surface is illuminated by an expanded laser beam and filmed by a CCD camera. Through interference, a speckle pattern is generated, which changes if the object is displaced or deformed. The resolution of such systems depends solely on the wave-length of the laser beam; regardless of the size of the specimen, a resolution of around 10⁻⁴ mm can be reached. However, this experimental technique is quite susceptible to ambient vibrations. The so-called Digital Image Correlation (DIC) technology was first developed by researchers at the University of South Carolina in the 1980s (Peters and Ranson, 1982) and has been significantly improved since. One advantage of the DIC technology lies in its insensitivity to vibrations in the setup; however, the resolution depends on the field of view and the lens system employed.

Similar to structural-scale applications, modal characteristics can also serve for non-destructive tests in laboratory environments. Classical examples are the determination of eigenfrequencies, eigenmodes, and damping using sensors such as accelerometers or laser vibrometers. These techniques are discussed further in the subsequent chapter.

Monitoring

There are many definitions of the term monitoring, depending on the field. In structural engineering, the term generally refers to any form of planned continuous surveillance of a structure's behavior. Monitoring starts with the development of a suitable concept, may include automatic alarm and control scenarios, and provides stakeholders with the necessary data to base decisions on (Bergmeister and Santa, 2004; Bergmeister and Wendner, 2010).

Monitoring tasks include but are not limited to: monitoring to determine material and structural properties by direct or inverse analyses; monitoring as an element of regular inspection; monitoring during operation to ensure safety and serviceability; monitoring to update long-term predictions; monitoring in case of questionable condition, coupled with alarms.

Depending on the monitoring task and the type of structure, a suitable monitoring concept has to be developed. Differences lie in the extent and frequency of measurements/surveillance, which can range from continuous surveillance with high sampling rates to time-discrete periodic measurement at extreme events or in the course of regular inspections. Consequently, the sensor equipment can be permanently installed and operated by automated acquisition systems, triggered by certain events or even be mobile solutions that are manually installed and only temporarily operated.

The duration of measurements in combination with the monitoring task ultimately determines the choice of sensor system. Key features relate to energy consumption during operation, sustainability, accuracy, resolution, long-term stability, susceptibility to electromagnetic interference, durability in aggressive environments, and requirements for data transmission (cables, wireless), among others. In many cases, opposing requirements have to be fulfilled. The need to capture traffic loads associated with quickly passing vehicles requires high sampling rates, which, however, produce large amounts of data that cannot be managed for long-term tasks. Consequently, optimized yet unbiased data reduction and pre-processing techniques have to be employed.

Reliability Assessment

In general, the determination of a structural system's safety level includes the formulation of limit state equations (separating the failure domain from the region of acceptable behavior) and of stochastic models for all input variables. Limit state equations compare a certain action quantity S with an admissible limit, typically denoted as resistance R, where the probability of S exceeding R (equivalent to a negative safety margin M = R − S) is denoted as the failure probability pf = P(S > R) = P(M < 0), which is limited by society's safety demand.

Limit state equations can be formulated for the ultimate capacity in order to ensure a structure's safety and avoid loss of life as well as economic losses. Additionally, limit states for serviceability aspects as well as durability are formulated in the course of a fully probabilistic design or assessment. The partial safety factors specified in the codes for different load combinations are derived from these limit state equations. Typically, society demands a safety level with regard to collapse of pf,req,ULS = 10⁻⁶ per year and with regard to violations of functionality and serviceability of pf,req,SLS = 10⁻³ per year, depending on the consequences of failure. For convenience, failure probabilities and hence safety levels are typically expressed in terms of an equivalent safety index β, assuming Gaussian distributions. The probability of failure then reads pf = Φ(−β), which strictly holds if and only if both R and S follow a Gaussian distribution. In this case, the safety margin M is also normally distributed and the safety index can be defined as β = μM/σM. This definition traces back to Basler (1961) and Cornell (1969) and has become common practice; even in situations in which the underlying assumption does not hold, failure probabilities are expressed by an equivalent safety index β.
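
The conversion between the two measures is a one-liner, sketched below with scipy; the listed target values follow the code-type demands quoted above, and the closing comment restates the Basler-Cornell definition for a linear Gaussian margin.

```python
# Sketch: converting between failure probability and the equivalent safety
# index, beta = -Phi^{-1}(pf), for typical annual target values.
from scipy.stats import norm

for pf in (1e-3, 1e-6, 1e-7):
    beta = -norm.ppf(pf)
    print(f"pf = {pf:.0e} per year -> beta = {beta:.2f}")
# For a linear margin M = R - S with independent Gaussian R and S:
# beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2), and pf = Phi(-beta).
```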

Technical Life Cycle and Safety Demand

It is important to note that the required safety demand depends on the type of structure or structural element, the consequences associated with failure, and the expected remaining life-time. Considering design life-times ranging from 20 years, typically 50 years, up to 200 years, the safety demands for new and especially for existing structures can vary widely. The actually required life-time of a structural element does not necessarily coincide with the design life-time of the overall structural system, which by itself may vary depending on the type of structure. On the structural level, Eurocode 0 (Eurocodes, 2002) allows for an adaptation of the demanded safety level depending on the expected consequences of failure or malfunction, as well as on the design life-time, by prescribing annual safety levels for different consequence classes.

Approximation Techniques

Reliability levels in general, and structural reliability in particular, can only be approximated and only in rare cases be determined in closed form. Available approximation techniques comprise the analytical First (FORM) and Second (SORM) Order Reliability Methods (Schneider, 1996), in which the limit state equation is approximated by a linear (FORM) or quadratic (SORM) function at the design point. In many cases, even for nonlinear conditions, an excellent approximation can be reached using a FORM reliability analysis.
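As an illustration, the following sketch implements the standard Hasofer–Lind/Rackwitz–Fiessler (HL-RF) iteration underlying FORM in standard normal space; the limit state function and all parameter values are assumptions for demonstration, not taken from the article.

```python
# Minimal FORM (HL-RF) iteration in standard normal space.
import numpy as np

def grad(g, u, h=1e-6):
    """Central finite-difference gradient of g at u."""
    return np.array([(g(u + h*e) - g(u - h*e)) / (2*h)
                     for e in np.eye(len(u))])

def form_beta(g, u0, tol=1e-8, itmax=100):
    """Return the FORM safety index beta = ||u*|| at the design point."""
    u = np.asarray(u0, float)
    for _ in range(itmax):
        gv, gg = g(u), grad(g, u)
        u_new = (gg @ u - gv) * gg / (gg @ gg)   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            return np.linalg.norm(u_new)
        u = u_new
    return np.linalg.norm(u)

# Assumed example: M = R - S with R ~ N(5, 1) and S ~ N(2, 0.5)
g = lambda u: (5.0 + 1.0*u[0]) - (2.0 + 0.5*u[1])
print(form_beta(g, [0.0, 0.0]))   # 3/sqrt(1.25) = approx. 2.68
```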

Sampling Techniques

An alternative to approximation techniques are sampling techniques, ranging from “crude” Monte Carlo sampling over various types of stratified sampling to the Response Surface Method (Iman et al., 1981). In a classical Monte Carlo simulation, a limited number of samples is generated and evaluated with respect to the formulated limit state conditions. Unfortunately, the reliability levels of interest are extremely small (pf < 10⁻⁶), leading to a very high number of required simulations. This is typically not feasible for nonlinear problems and the application of finite element or discrete element models. In case time-dependent reliability levels or dynamic analyses in an explicit framework are required, the computational costs become prohibitive. One possible solution are stratified sampling techniques such as Latin Hypercube Sampling (LHS) according to McKay et al. (2000), which have proven to allow satisfactory estimations of the reliability level using a low number of realizations (Strauss et al., 2009), typically as low as 30 to 50.
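A minimal sketch of LHS with scipy is given below; the two marginal distributions and their moments are hypothetical placeholders, not values from the article.

```python
# Latin Hypercube Sampling of two inputs with scipy, mapped to
# assumed Gaussian marginals through their inverse CDFs.
import numpy as np
from scipy.stats import qmc, norm

n = 50                                    # small sample size, as cited
sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(n)                     # n x 2 points in [0, 1)^2

# Map the uniform strata to physical variables, e.g., a resistance R
# and an action S (hypothetical moments):
R = norm.ppf(u[:, 0], loc=5.0, scale=1.0)
S = norm.ppf(u[:, 1], loc=2.0, scale=0.5)
M = R - S                                 # safety margin per realization
```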

After evaluating all generated samples, failure probabilities can be determined by the classical probability definition pf = nobs/n, based on the number of observed failure events nobs out of the total number of simulations n. Alternatively, the failure probability can be approximated from the statistical moments of the safety margin M according to the Basler–Cornell definition, or by integrating the fitted tail of the probability distribution fM(x). Recently, the use of stochastic finite elements has also been suggested (Matthies and Keese, 2005). Although this approach does not require multiple evaluations of a nonlinear finite element model, the stochastic finite elements themselves are computationally expensive, ultimately nullifying any potential advantage. Furthermore, existing numerical frameworks or computer models cannot be reused.
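Both estimators can be written in a few lines; the sketch below uses a synthetic set of safety margins purely for illustration.

```python
# Two estimators of p_f from evaluated samples: the classical counting
# estimate and the moment-based Basler-Cornell approximation.
import numpy as np
from scipy.stats import norm

# Synthetic safety margins, assumed Gaussian for this illustration:
M = np.random.default_rng(0).normal(3.0, 1.2, 100_000)

pf_count = np.mean(M <= 0)             # p_f = n_obs / n
beta_bc = M.mean() / M.std(ddof=1)     # beta = mu_M / sigma_M
pf_bc = norm.cdf(-beta_bc)             # p_f = Phi(-beta)
print(pf_count, pf_bc)                 # both approx. 6e-3 here
```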

Statistical Dependence

A central problem in reliability analysis is the simulation of the stochastic response of structural systems. The influence parameters of resistance and load are uncertain to some degree and typically expressed as probabilistic, possibilistic, or fuzzy quantities. Moreover, information on functional or statistical association among these variables may be available. In these cases, generating the correlated inputs required for the simulation of resistances, failure probabilities, or life-times can be demanding. The significant effect of dependence, so far an underestimated issue, has recently been highlighted by Dutfoy and Lebrun (2009).

Two main approaches to the problem of generating correlated variables are employed. The first was developed by Iman and Conover (1982) and was later refined by combining stratified sampling (e.g., Latin Hypercube Sampling) with optimization (simulated annealing) techniques (Vořechovský and Novák, 2009). The second approach relies on copula functions (Embrechts et al., 2003). Copulas can represent any type of dependence structure and remain invariant under monotone transformations of the variables.
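As an illustration of the copula approach, the sketch below generates statistically dependent realizations through a Gaussian copula; the marginal distributions and the correlation value are assumptions, not taken from the article.

```python
# Dependent inputs via a Gaussian copula: correlate in standard normal
# space, then map through uniform scores to arbitrary marginals.
import numpy as np
from scipy.stats import norm, lognorm, gumbel_r

rng = np.random.default_rng(1)
rho = 0.6                                  # assumed correlation (Gaussian space)
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=1000)

u = norm.cdf(z)                            # uniform scores in (0, 1)
R = lognorm.ppf(u[:, 0], s=0.15, scale=40.0)    # e.g., a resistance
S = gumbel_r.ppf(u[:, 1], loc=10.0, scale=2.0)  # e.g., an extreme action
```

Because copulas are invariant under monotone transformations, the rank correlation imposed in Gaussian space carries over to the transformed marginals.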

Performance Prediction and Asset Management

The basis of performance assessment, performance-based design, and the partial safety factor concept are the concepts of reliability theory, the definition of limit states, and, generally, the framework of fully probabilistic analysis. The requirements associated with life cycle design, the assessment of the remaining life-time of existing structures, and the optimization of maintenance and rehabilitation require an extension of these frameworks and concepts in time, considering changes in loads, material properties, and deterioration processes.

Modeling Time-Dependent Reliability

Structural responses obtained from randomized nonlinear FEM analyses, together with sophisticated degradation models and adequate stochastic models, are the basis for a rational life-time analysis of structural systems subjected to degradation. The reliability analysis, which is part of the life-time analysis, has to be based on computationally efficient approximation or sampling techniques in order to handle the demanding nonlinear problems in time. The current state-of-the-art approach is based on advanced Monte Carlo sampling techniques such as LHS. According to Strauss et al. (2013) and Wendner et al. (2010), the required procedure can be summarized by the following steps: (i) development of a mechanical model for the structural response and aging/deterioration effects; (ii) formulation of stochastic models for all input quantities, including their statistical dependence; (iii) generation of a set of n realizations of all random variables in the analysis for m relevant points in time, e.g., by Latin Hypercube Sampling (LHS); (iv) independent analysis of all n mechanical problems for all m points in time; (v) statistical evaluation of the response quantities; (vi) reliability and life cycle performance assessment based on the PDFs of actions and the obtained empirical CDFs of the structural response for any given point in time; (vii) sensitivity analyses to reveal the relation between input quantities and structural response in time.
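Steps (iii) to (vi) can be organized as a simple double loop over realizations and points in time; in the sketch below, mechanical_model is a hypothetical placeholder for the (nonlinear finite element) solver.

```python
# Schematic of steps (iii)-(vi): n LHS realizations evaluated at m
# points in time, followed by empirical CDFs of the response.
import numpy as np
from scipy.stats import qmc

def mechanical_model(x, t):
    """Placeholder: maps inputs x and time t to a response quantity."""
    return x.sum() * (1.0 + 0.01 * t)            # dummy response, assumed

n, m = 50, 10
times = np.linspace(0.0, 50.0, m)                # years, assumed grid
x = qmc.LatinHypercube(d=4, seed=0).random(n)    # step (iii)

response = np.empty((n, m))
for i in range(n):                               # step (iv)
    for j, t in enumerate(times):
        response[i, j] = mechanical_model(x[i], t)

response_sorted = np.sort(response, axis=0)      # step (v)
ecdf_levels = (np.arange(1, n + 1) - 0.5) / n    # empirical CDF, step (vi)
```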

Life Cycle Cost (LCC) and Life Cycle Assessment (LCA)

Monitoring, inverse analysis, structural analysis, and (time-dependent) reliability assessment methods are key elements in optimized design and maintenance planning. LCC and LCA represent approaches to quantify and optimize the life cycle performance of alternative solutions and thus provide the stakeholder with decision support tools. The former has been investigated by Val and Stewart (2003), Furuta et al. (2003), and Frangopol and Messervey (2009). The latter aims at quantifying the environmental impact of a structure or project from the planning phase to demolition in terms of energy consumption, use of natural resources, and CO2 emissions (Malmqvist et al., 2011).

Performance Indicators

In general, no model will be able to fully capture the real behavior of structures due to aleatory and epistemic uncertainties. While the former represents an inherent statistical variability that cannot be avoided, the latter covers the model uncertainty, that is, the systematic uncertainty that, in principle, could be known (Ang and Tang, 2007). The typical goal of uncertainty quantification is a reduction of the epistemic uncertainty and its transformation into aleatory uncertainty. Assuming a perfect model, aleatory uncertainties can be quantified in a straightforward manner, for example, by Monte Carlo sampling.

It is natural to define so-called “performance indicators” on the basis of the response quantities that are most sensitive to changes in structural performance. These performance indicators are typically either directly monitored response quantities or derived values. Deflections or strains can characterize changes in the capacity or indicate serviceability problems, while the electrical potential, the pH value, or the concrete permeability can serve to quantify the corrosion resistance and durability. Further derived values such as mean value correction factors (Strauss et al., 2012), the reliability level, redundancy, or risk are also suitable performance indicators (Zhu and Frangopol, 2012).

Performance indicators are essential quantities as they form the basis for the performance assessment of existing systems, their continuous performance surveillance, and the optimization of their maintenance. Performance indicators are also paramount for the optimization of new structures. They are a tool by which the safety and integrity of structural systems is ensured, as early warning is provided in case predefined thresholds are exceeded. Needless to say, a performance indicator used in the maintenance optimization process has to represent a specific type of performance accurately and predict its future variations. Frangopol and Okasha (2008) have identified numerous commonly encountered and used life cycle performance measures.

In a first approach, performance indicators can be derived from sensor information or visual inspection methods with respect to code-specific limit states. The sensor or inspection information is treated as extreme value probability density functions using Bayesian updating approaches in order to support prediction models.

The obtained performance indicators associated with the serviceability and the ultimate limit state of the structure are gathered in a performance matrix. This matrix provides a comprehensive performance assessment quantity in time and can be used as a short-term prior for prediction models and probability-of-availability functions (Okasha and Frangopol, 2010). In a second approach, performance indicators are derived from structural quantities (e.g., stresses or strains) at the sensor locations that are updated by the sensor information. The updated structural quantities (sensor information as extreme value probability density functions) yield reliability indices or performance indicators with respect to the code-specific limit states.

Performance monitoring must be a carefully planned activity, which generally has to cover the following steps: definition of the objective of the monitoring task; selection of the monitoring method and of the locations corresponding to the performance indicators; execution of the monitoring activity (Zilch et al., 2009); evaluation and interpretation of the data; adjustment of models based on the monitoring data (model updating); performance assessment and extrapolation (Gul and Catbas, 2008); and decisions about possible interventions such as maintenance or repair, among others (Aktan et al., 2000).

Deterioration Processes for Performance Prediction

The life cycle performance and life-time of (concrete) structural systems are influenced by many time-dependent processes that alter the mechanical characteristics of the involved materials as well as the structural response. These processes partly interact and thus amplify or mitigate each other’s effects. Furthermore, the effects of extreme events, of sustained loads, but also of cyclic loads on the remaining service life need to be accounted for. Deterioration mechanisms that need to be considered are carbonation, (chloride-induced) corrosion of reinforcement steel, fatigue, freeze-thaw cycles, and chemical attack by aggressive fluids and fumes. Any time-dependent analysis needs to account for the aging visco-elastic behavior of concrete and the visco-plastic response of steel, typically described by a creep (compliance) function for concrete and a relaxation function for steel. Extreme events such as high temperatures (fire, for example, in tunnels), earthquakes, and impact loads are relevant time-discrete load situations that nevertheless influence the ongoing deterioration and aging processes. Naturally, not all processes act at the same time or have, for a given practical application, the same significance. Ultimately, however, a realistic prediction of a structure’s service life must be based on a full simulation of all relevant phenomena in time and thus be able to capture each process by itself as well as all significant interactions.

Concrete Creep and Shrinkage

For many concrete structures, the time-dependent processes of creep and shrinkage are quintessential for ensuring both safety and serviceability over the course of their service life. Creep is defined as the increase in deformation under sustained load, shrinkage as the shortening due to water loss, and relaxation as the reduction of stress under an imposed constant deformation.

In high-rise buildings, concrete creep may cause non-negligible relative deformations between columns of different sizes and the concrete core. In mass structures, shrinkage can lead to significant cracking; in statically indeterminate frames, to the development of constraint forces; and in thin webs of pre-stressed girders, the shrinkage-associated shortening may cause a noticeable additional prestress loss and thus contribute to potentially excessive deformations. Naturally, the relaxation of the pre-stressing steel itself has to be accounted for as well.

It is clear that in long-span pre-stressed bridges, the effects of shrinkage are exceeded by the combined effects of concrete creep and steel relaxation. Achieving a sustainable built infrastructure requires that the time-dependent deformation behavior be captured accurately. Typically, creep deformations are split into reversible and irreversible parts associated with the elastic, the aging visco-elastic, and the non-aging visco-elastic behavior (Bažant and Baweja, 2000).

The sensitivity of structures to creep and shrinkage varies widely, and a sophisticated model is necessary only for certain types of structures. Based on general experience, an approximate classification of sensitivity levels may be made (Bažant et al., 2015), ranging from structures that do not require a specific analysis of creep effects up to highly susceptible structures such as record-span bridges, nuclear containments and vessels, large offshore structures, large cooling towers, thin roof shells, and super-tall buildings. The latter require the most realistic and accurate analysis, typically a step-by-step computer analysis based on a rate-type constitutive law and a damage constitutive model (Yu et al., 2012), coupled with the solution of the differential equations for drying and heat conduction, and updated based on short-time tests of the given concrete. The error in maximum deflections, stresses, and cracking predictions caused by replacing a realistic analysis with a simplistic estimate of creep and shrinkage effects is often larger than the gain from replacing old-fashioned frame analysis by pencil with finite element analysis by computer. A detailed sensitivity study of creep and steel relaxation parameters with regard to multi-decade deformations can be found in Wendner et al. (2015). For lower sensitivity levels, an analysis based on the age-adjusted effective modulus method (Bažant, 1972), as endorsed by ACI and fib, is sufficient; it is recommended for preliminary design estimates of creep-sensitive structures and for standard structures. In the service stress range (up to 0.40 fc), a linear dependence of the creep strain on stress may be assumed as an acceptable approximation. This means that, for a constant uniaxial stress σ applied at age t’, the strain evolution is given as the sum of the creep deformation σJ(t,t’), the drying shrinkage εsh(t,t0), the autogenous shrinkage εau(t), and the thermal strain αTΔT(t), with t0 denoting the start of drying. The compliance function J(t,t’) is generally split into an elastic (instantaneous) part, basic creep without moisture exchange, and drying creep.
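Written out, the strain decomposition described above reads

```latex
\varepsilon(t) \,=\, \sigma\, J(t,t')
             \,+\, \varepsilon_{sh}(t,t_0)
             \,+\, \varepsilon_{au}(t)
             \,+\, \alpha_T\, \Delta T(t)
```

with all symbols as defined in the preceding paragraph.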

The reported differences between tensile and compressive creep are likely caused by cracking damage. This damage should properly be accounted for by a model of rate-type form, which is nowadays routine in finite element computations.

As defined above, creep at constant stress and in a constant environment is characterized by the compliance function J(t, t’), which represents the strain at time t caused by a unit sustained uniaxial stress applied at age t’. The generalization to a time-variable stress σ(t) is obtained by applying the principle of superposition in time, which yields a linear viscoelastic stress-strain relation in the form of a Volterra integral equation whose kernel is not of convolution type because of chemical aging. However, when a rate-type creep law is used, the structural creep problem can be reduced to a system of first-order ordinary differential equations in time with age-dependent coefficients. The aging aspect is commonly captured by the microprestress-solidification theory (MPS) introduced by Bažant et al. (1997); the drying shrinkage term must capture the size effect and the asymptotic properties based on the diffusion theory of moisture transport. The aging nature of concrete creep is reflected by the solidification theory, which relates the aging creep behavior of concrete to the non-aging properties of the cement gel and the amount of gel already formed. After hydration ceases, the multi-year and multi-decade aging is, in the microprestress theory, explained by the relaxation of the tensile microprestress, which balances the disjoining pressures in nanopores and facilitates the shear ruptures of the interatomic bonds responsible for creep.
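Numerically, the superposition principle amounts to summing compliance-weighted stress increments, ε(t) ≈ Σ J(t, ti) Δσi. The sketch below uses a hypothetical power-law compliance function as a stand-in; it is not model B3 or B4, and all parameter values are assumed.

```python
# Strain under a variable stress history by numerical superposition.
# The compliance function is a hypothetical power-law stand-in.
import numpy as np

def J(t, tp, q1=20e-6, q2=100e-6, n=0.1):
    """Assumed aging compliance [1/MPa]: instantaneous + creep part."""
    return q1 + q2 * ((t - tp) ** n) / (tp ** 0.5)

t_hist = np.array([28.0, 90.0, 365.0])    # loading ages [days], assumed
dsigma = np.array([10.0, -2.0, 1.0])      # stress increments [MPa], assumed

def strain(t):
    """Superposed strain at time t from all increments applied before t."""
    active = t_hist < t
    return np.sum(J(t, t_hist[active]) * dsigma[active])

print(strain(10_000.0))   # total strain after roughly 27 years
```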

Existing concrete creep models comprise, among others, the RILEM model B3 (Bažant and Baweja, 2000) and the new RILEM recommendation B4 (Bažant et al., 2015), both based on MPS, as well as the ACI model 209 (ACI Committee 209, 2008), the fib Model Code 2010 model (fib, 2013), and the Gardner model (Gardner and Lockman, 2001). While B3 and B4 can be directly converted into a rate-type form, the others must be transformed using Laplace transform inversion; Widder’s approximate inversion formula is an effective solution for this purpose (Yu et al., 2012).

Carbonation

Carbonation is a well-known process that changes the characteristics of concrete over time, caused by the carbon dioxide present in the environment. Starting at the concrete surface, a carbonation front penetrates the concrete and ultimately causes a drop of the pH value to roughly 8.3. When the carbonation front reaches the level of the rebar or any other metal component embedded in the concrete, such as an anchor, the steel is depassivated and corrosion commences, provided that sufficient oxygen and humidity are present. The rate of carbonation depends on many parameters, among others the permeability of the concrete, the ambient temperature, the relative humidity, and the carbon dioxide concentration. Carbonation is accompanied by a change in mechanical characteristics, such as a noticeable increase in compressive strength in the affected volume close to the surface, through the formation of calcium carbonate (CaCO3). In parallel, the density increases, which decelerates the carbonation rate.
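A common first-order engineering idealization, not given in the article, describes the carbonation depth as growing with the square root of time, x_c(t) = K√t; the sketch below uses assumed values for the carbonation coefficient and the concrete cover.

```python
# Square-root-of-time carbonation model (common idealization):
#   x_c(t) = K * sqrt(t)
K = 4.0        # assumed carbonation coefficient [mm / sqrt(year)]
cover = 30.0   # assumed concrete cover [mm]

t_init = (cover / K) ** 2   # years until the front reaches the rebar
print(t_init)               # 56.25 years for these assumed values
```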

Steel Relaxation

The stress relaxation in steel is a manifestation of visco-plasticity, a phenomenon systematically studied and well understood for metals and alloys at high temperatures (Jirásek and Bažant, 2002). The visco-plastic strain rate depends only on the current stress, not on the stress or strain history (i.e., there is no memory). In its simplest form, with no internal friction, the constitutive behavior can be described by a Bingham model.
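One standard textbook rate form of such a model, quoted here for orientation rather than from the article, is

```latex
\dot{\varepsilon}_{vp} \,=\, \frac{\langle\, |\sigma| - \sigma_y \,\rangle}{\eta}\,
\operatorname{sign}(\sigma)
```

where σ_y denotes the yield threshold, η the viscosity, and ⟨·⟩ the Macaulay brackets; for vanishing σ_y, i.e., no internal friction, the model reduces to a linear dashpot.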

For design practice, simplified models for steel relaxation have been proposed by fib (fib, 2013). ACI, however, does not provide a recommendation; instead, in American practice, an approximate empirical formula according to Magura et al. (1964) is used.

Both models have been formulated based on the practice of measuring the stress relaxation of prestressing steel tendons at constant strain and constant temperature. The resulting simple formulas are then used directly in the calculation of prestress losses. This approach, however, is contingent on the assumptions that strain changes during the structural life-time are negligible compared to the initial strain in the steel and that temperature changes are unimportant. Bažant et al. (2012a, 2012b) and Bažant and Yu (2013) recently showed that in creep-sensitive structures, such as large-span segmentally erected box girders, the strain changes in concrete are not negligible, and that the temperature rise in concrete slabs exposed to the sun may be important.

Steel Corrosion

Generally, the dominant process governing the degradation of reinforced or pre-stressed concrete structures is the corrosion of the reinforcement steel or of other structural steel components. Its development in time is traditionally split into an initiation period ti and a propagation period tp, see Tuutti (1982), where the former denotes the period up to the possible initiation of corrosion, that is, the time from concrete casting to the moment when the reinforcement is no longer passivated, for example, due to the effect of chlorides. In general, steel within concrete is protected against corrosion by a thin film of oxidation products, which forms due to the high alkalinity of the surrounding concrete. Corrosion commences as soon as the pH value drops below roughly 9.0, whether caused by carbonation, chloride ingress, or physical damage. The quantity controlling the progress of corrosion is the so-called corrosion rate. This variable is highly influenced by environmental conditions such as the relative humidity of the surrounding air, the temperature, the available oxygen, the humidity near the steel, the degree of carbonation, and the chloride concentration. Obviously, the corrosion rate is also influenced by the diffusivity of the concrete, which is governed by the water-cement ratio as well as by cracking (Schiessel and Raupach, 1997).

Chloride Ingress

Chlorides can be present in the concrete itself (e.g., as part of the aggregates) or be introduced from the environment; typical sources are de-icing salt and maritime climates. Chemically, the chlorides relevant for concrete deterioration are NaCl and CaCl2. There are two governing transport mechanisms: diffusion and capillary suction. In the first case, dissolved chloride ions are slowly transported, driven by the concentration gradient; in the latter, the intermittent change between wetting and drying cycles causes accelerated chloride ingress during rewetting (Papadakis et al., 1996). However, not all dissolved chlorides are mobile and ultimately lead to the initiation or acceleration of steel corrosion. A certain percentage reacts chemically with calcium aluminates and hydration products; additionally, chlorides can be physically adsorbed by the hardened cement paste or the aggregates. Danger emanates only from free chloride ions, which cause a decrease in the pH value, an increase in the solubility of Ca(OH)2, and consequently also an increase in the electrical conductivity (Papadakis et al., 1996).

Depassivation of the steel is ultimately reached when the molar concentration of dissolved chloride ions Cl⁻ near the steel member (e.g., a rebar) reaches a certain percentage of the molar concentration of hydroxide ions OH⁻.
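For the diffusion-dominated case, the chloride profile is classically approximated by the closed-form error-function solution of Fick’s second law for a constant surface concentration; the sketch below, with all parameter values assumed for illustration, estimates the time until the critical content is reached at the rebar level.

```python
# Error-function solution of Fick's second law for chloride ingress
# with constant surface concentration (classical closed form):
#   C(x, t) = C_s * (1 - erf(x / (2 * sqrt(D * t))))
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

C_s = 0.60       # surface concentration [% binder mass], assumed
D = 1.0e-12      # apparent diffusion coefficient [m^2/s], assumed
x = 0.040        # concrete cover [m], assumed
C_crit = 0.10    # critical (depassivation) content [%], assumed

C = lambda t: C_s * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

year = 365.25 * 24 * 3600.0
t_dep = brentq(lambda t: C(t) - C_crit, 1e3, 1e11)   # root in seconds
print(t_dep / year)   # approx. 13 years for these assumed values
```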

Freeze-Thaw Cycles

A concrete’s frost resistance is a material property mainly governed by the pore structure (water-cement ratio), but also by environmental conditions (presence of chlorides, frequency and amplitude of freeze-thaw cycles). Repeated freezing gradually leads to a decrease in mechanical properties (e.g., strength, modulus) and to superficial damage. The reason is the volumetric expansion of the capillary water by 9% upon freezing, which causes the development of micro-cracks. The gradual decrease in mechanical properties is typically described by the relative decrease in the dynamic modulus. Many theories explain this behavior based on measurements, invoking hydraulic pressure, osmotic pressure, the development of micro-ice lenses, a critical saturation in freezing concrete, or thermodynamics (Bager and Jacobsen, 1999).

High Temperature and Fire

In recent years, there have been extensive investigations into the fire safety of concrete engineering structures such as tunnels (Mörth, 2005), bridges, high-rise buildings, power plants, and airport runways (Hironaka and Malvar, 1998), which may be exposed to temperatures above 150 °C. Damage and deterioration mechanisms can be characterized either by gradual material degradation due to physical and chemical processes, or by sudden explosive spalling due to extreme pressures of evaporating constituents.

Fatigue

For many structures, such as aircraft, ships, and bridges, but also for fastening systems, the fatigue life-time is an essential design aspect. However, when a long life-time is required, it is next to impossible to obtain the life-time histogram purely experimentally; thus, theoretical derivations are required (Le and Bažant, 2011).

The concept of fatigue life-time was pioneered by Wöhler (1860), who suggested plotting the applied nominal stress amplitude versus the critical number of load cycles, the so-called stress-life (S-N) curve. Basquin (1910) subsequently proposed a simple power-law relation between the life-time and the stress amplitude for fully reversed, constant-amplitude fatigue loading. The minimum-to-maximum stress ratio and the average stress have further been shown to affect the S-N curve significantly (Gerber, 1874), though not its power-law form.

After the advent of fracture mechanics, it was generally agreed that the fatigue life-time should be determined from the growth rate of a fatigue crack. Paris and Erdogan (1963) proposed a power law for the fatigue crack growth rate, called the Paris law, which expresses this rate as a power-law function of the amplitude of the stress intensity factor. Its integration up to the critical crack length yields the fatigue life-time. For metals, the Paris law exponent is approximately 4, whereas for quasi-brittle materials experiments show a significantly higher exponent, specifically about 10 for concrete (Bažant and Xu, 1991).
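The life-time integration can be sketched in a few lines; all parameter values below are assumed for illustration (the exponent m = 10 echoes the value reported for concrete), with units taken consistent with ΔK in MPa·√m and da/dN in m/cycle.

```python
# Fatigue life-time by integrating the Paris law
#   da/dN = C * (dK)^m,  dK = dsigma * Y * sqrt(pi * a),
# from an initial flaw a0 to a critical crack length ac.
import numpy as np
from scipy.integrate import quad

C, m = 1e-4, 10.0        # assumed Paris-law constants
dsigma, Y = 4.0, 1.12    # stress amplitude [MPa], geometry factor
a0, ac = 1e-3, 2e-2      # initial and critical crack lengths [m]

dK = lambda a: dsigma * Y * np.sqrt(np.pi * a)
N_f, _ = quad(lambda a: 1.0 / (C * dK(a) ** m), a0, ac)
print(f"{N_f:.3g} cycles to failure")   # approx. 2.5e6 for these values
```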

Dynamic Loads, Seismicity, and Impact

Earthquakes are critical for design in many parts of the world. In order to reach the required life-time, a structure must be able to withstand a seismic event even towards the end of its service life, after being subjected to years of environmental influences, sustained loads, and cyclic loading. For dynamic and seismic analyses, inertia, the rate dependence of strength, and dynamic material properties are of relevance. On the structural scale, the dynamic characteristics can be described by eigenfrequencies, eigenmodes, and damping parameters.

In design, earthquakes and impact loads are considered extreme events with different safety requirements (reduced partial safety factors). In spite of the low probability of occurrence of an extreme event, there is still the necessity to assess the remaining load-carrying capacity post-event based on inspection results and non-destructive testing. For critical applications, a structural member’s performance during and after an extreme event, at least for a certain period of time, is an essential property.

Summary

In the future, the concrete engineering community will require sufficiently accurate, theoretically based, and experimentally validated prediction models for time-dependent deterioration processes and the evolution of material properties, including a framework for the quantification of model uncertainties. Combined with suitable modeling concepts for the structural response based on fracture mechanics and with efficient sampling techniques, the determination of realistic failure probabilities and hence life-times becomes possible. This is a necessary prerequisite without which concepts such as performance-based design and assessment or life cycle cost optimization cannot be applied successfully. The design and maintenance of a durable, safe, and sustainable infrastructure depend on a better understanding of the aging characteristics of concrete, a goal to which this research is dedicated.
