The Logical Sustainability Theory for pension systems: the discrete-time model in a stochastic framework under variable mortality


The aim of this work is to provide the logical sustainability model for defined contribution pension systems (see [1], [2]) in a discrete-time framework with a stochastic financial rate of return on the pension system fund and stochastic productivity of the active participants. In addition, the model is developed under the assumption of variable mortality tables.

Under these assumptions, the evolution equations of the fundamental state variables, the pension liability and the fund, are provided. In this very general discrete-time framework, the necessary and sufficient condition for the sustainability of the pension system, together with all the other basic results of the logical sustainability theory, is proved.

In addition, this work proves new results on the efficiency of the rule that stabilizes over time the level of the unfunded pension liability with respect to wages, a level defined as the β indicator.
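In symbols (notation assumed here for illustration; the paper's own symbols may differ), the β indicator described above is the ratio of the unfunded pension liability to the wage bill at time t:

```latex
\beta_t = \frac{UL_t}{W_t}
```

where $UL_t$ denotes the unfunded pension liability and $W_t$ the total wages of the active participants; the stabilization rule aims at keeping $\beta_t$ at a target level over time.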

Open access
Pointed versus singular Boltzmann samplers: a comparative analysis


Over the last two decades, huge systems (such as giant graphs, big data structures, and so on) have played a central role in computer science, and with improvements in technology these large objects are now massively used in practice. In order to handle them, we need to analyse typical properties of models of large objects. One way to study typical behaviour consists in generating random objects to obtain experimental results on their properties. A new technique was introduced ten years ago: Boltzmann sampling. It was presented by Duchon et al. and is based on automatically interpreting the specification of the combinatorial objects under study as random samplers.

One of the core problems in Boltzmann sampling lies in the distribution of object sizes and in the choice of parameters yielding the most appropriate size distribution: the efficiency of the sampling depends on this choice. Moreover, some additional ideas allow the efficiency to be improved; one of them is based on anticipated rejection, the other on the combinatorial differentiation (pointing) of the specification. Anticipated rejection consists in killing the recursive construction of a random object as soon as it is certain to exceed the maximum target size, rather than waiting for the process to end naturally. In the original paper, both approaches were presented and used on the same kinds of structures, but the methods were not compared. In this paper we propose a detailed comparison of both approaches, in order to understand precisely which method is more efficient.
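To make the anticipated-rejection idea concrete, here is a minimal sketch (not the samplers of Duchon et al. verbatim; all names are illustrative) of a singular Boltzmann sampler for binary trees specified by B = Z | B × B, with size counted in leaves. A tree with L leaves has exactly 2L − 1 nodes, so the draw is killed as soon as the running node count exceeds 2·max_size − 1, instead of letting the recursion finish:

```python
import random

class _Abort(Exception):
    """Raised internally to cut off an oversized draw early."""
    pass

def gamma_binary_tree(z, b_z, max_size, rng=random.random):
    """Free Boltzmann sampler for binary trees (B = Z | B x B, size = leaves).

    Anticipated rejection: since a tree with L leaves has 2L - 1 nodes,
    we abort as soon as more than 2*max_size - 1 nodes have been drawn,
    which happens exactly when the finished tree would exceed max_size.
    Returns the tree as nested tuples ('leaf' at the leaves), or None
    if the draw was aborted.
    """
    p_leaf = z / b_z            # Boltzmann weight of the leaf atom
    nodes = 0

    def draw():
        nonlocal nodes
        nodes += 1
        if nodes > 2 * max_size - 1:   # anticipated rejection
            raise _Abort
        if rng() < p_leaf:
            return "leaf"
        return (draw(), draw())        # internal node: two recursive draws

    try:
        return draw()
    except _Abort:
        return None

def leaves(t):
    """Size of a sampled tree (number of leaves)."""
    return 1 if t == "leaf" else leaves(t[0]) + leaves(t[1])

# Singular sampler: z is taken at the dominant singularity 1/4 of
# B(z) = (1 - sqrt(1 - 4z))/2, where B(1/4) = 1/2, so each node is a
# leaf with probability 1/2 (a critical branching process).
def sample_in_window(lo, hi, rng=random.random):
    """Retry until an accepted tree has its size in [lo, hi]."""
    while True:
        t = gamma_binary_tree(0.25, 0.5, hi, rng)
        if t is not None and lo <= leaves(t) <= hi:
            return t
```

The abort bound also caps the recursion depth at roughly 2·max_size, so no bookkeeping beyond a single counter is needed; a pointed sampler would instead modify the specification itself before any drawing takes place.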

Open access