BerkeleyDB shared cache

Hello,
I'm using BerkeleyDB 4.7.25 (via the BerkeleyDB Perl module) on a FreeBSD system in a forking daemon, with a shared cache environment. Could someone, looking at the output below, tell me whether these statistics are good for the cache? You'd think that "Requested pages found in the cache (99%)" looks good, but I see no shared memory segments being used of any kind, and I can't tell how much shared memory is being used, if any.
Thanks,
- Mark
{root} % /usr/local/bin/db_stat -m -h /var/db/smtpd/
20MB 1KB 752B Total cache size
1 Number of caches
1 Maximum number of caches
20MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
231751 Requested pages found in the cache (99%)
410 Requested pages not found in the cache
1 Pages created in the cache
410 Pages read into the cache
3430 Pages written from the cache to the backing file
0 Clean pages forced from the cache
0 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
411 Current total page count
411 Current clean page count
0 Current dirty page count
2053 Number of hash buckets used for page location
232572 Total number of times hash chains searched for a page
2 The longest hash chain searched for a page
248006 Total number of hash chain entries checked for page
0 The number of hash bucket locks that required waiting (0%)
0 The maximum number of times any hash bucket lock was waited for (0%)
0 The number of region locks that required waiting (0%)
0 The number of buffers frozen
0 The number of buffers thawed
0 The number of frozen buffers freed
483 The number of page allocations
0 The number of hash buckets examined during allocations
0 The maximum number of hash buckets examined for an allocation
0 The number of pages examined during allocations
0 The max number of pages examined for an allocation
0 Threads waited on page I/O

Hi Mark,
Even though I'm not sure I can help you with everything (especially the Perl configuration), I didn't want your post to remain unanswered.
liarafan wrote:
Could someone, looking at the output below, tell me whether these statistics are good for the cache?
Your statistics look like the ones every Berkeley DB user dreams of. I'm not sure what your application scenario looks like (maybe you can tell us more) or what amount of data you use for the test (number of records, key/data size) compared to the size of the cache, but the statistics are looking great. Does the cache size correspond to the size of a normal working data set?
liarafan wrote:
I see no shared memory segments being used of any kind, and I can't tell how much shared memory is being used, if any.
Each of the BDB subsystems within a database environment is described by one or more regions ( http://www.oracle.com/technology/documentation/berkeley-db/db/ref/env/region.html ). The regions contain all of the per-process and per-thread shared information, including mutexes, that comprise a Berkeley DB environment. For example, one of the shared memory segments will be the transactional information for the system, and one (or more) of them will be the cache. What Berkeley DB product/flags are you using?
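To make the region point concrete: by default the regions are backed by files (__db.001, __db.002, ...) created in the environment home, not by System V shared memory, so tools like ipcs show no segments unless the environment was opened with the DB_SYSTEM_MEM flag. Below is a minimal sketch of opening such an environment; it uses the Berkeley DB Java base-API binding (com.sleepycat.db) purely for illustration, since the BerkeleyDB Perl module exposes the same flags.
import java.io.File;
import com.sleepycat.db.Environment;
import com.sleepycat.db.EnvironmentConfig;

public class OpenSharedEnv {
    public static void main(String[] args) throws Exception {
        EnvironmentConfig cfg = new EnvironmentConfig();
        cfg.setAllowCreate(true);
        cfg.setInitializeCache(true);        // DB_INIT_MPOOL: create/join the cache region
        cfg.setCacheSize(20L * 1024 * 1024); // roughly the 20MB cache from the db_stat output
        // cfg.setSystemMemory(true);        // DB_SYSTEM_MEM: put regions in System V shared
        //                                   // memory segments (visible to ipcs) instead of
        //                                   // the default __db.NNN backing files
        Environment env = new Environment(new File("/var/db/smtpd"), cfg);
        env.close();
    }
}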
Please let me know if this helped and if you have other questions.
Thanks,
Bogdan

Similar Messages

  • Is "Process scoped identity" the same thing as TopLink shared cache?

    Bumped into this thread on my investigation of ORM solutions:
    http://forum.hibernate.org/viewtopic.php?t=939623&highlight=toplink
    What I would like to know is whether "Process scoped identity" as Gavin King puts it, is the same as TopLink shared cache and if so, whether this could cause deadlocks?
    Quote:
        All the docs for Hibernate assume that you are working in a multi-user system,
        where process scoped identity is an absolute no-no - you would require
        synchronization on entity instances, which is guaranteed to result in deadlocks,
        since there is no possible "natural" order in which to obtain locks.

    Yes, process scoped identity is basically what TopLink's shared server session cache is offering. Having shared instances is a huge benefit for performance and is most noticeable with read-only and read-mostly data types shared across users.
    Yes, we do need to have locking to ensure that changes are safely written into the shared instance and that transactionally isolated (UnitOfWork) copies are safely made in a row consistent fashion. Although I cannot comment on Hibernate's implementation I assume that whatever object type they use in their shared (L2) cache offers the same concurrency protection to ensure that users get consistent data and that concurrent writes do not corrupt the cached object structure.
    I guess the only real difference is the object type cached. TopLink caches your business object, and Hibernate caches a custom object representing the state of your object. Both need to ensure proper locking for concurrency protection, but TopLink does not need to rebuild an instance for each client session unless wanted/needed.
    Doug

  • Dyld: shared cached file was build against a different libSystem.dylib?

    Hi
    I have a strange error when I open Terminal or the Console:
    dyld: shared cached file was build against a different libSystem.dylib, ignoring cache.
    I know how to repair this, so I did:
    sudo update_dyld_shared_cache -force
    And when I did that, I got some strange warnings:
    warning, could not bind /System/Library/QuickTime/QuickTimeMPEG4.component/Contents/MacOS/QuickTimeMPEG4 because realpath() failed on
    /System/Library/Frameworks/CoreMedia.framework/Versions/A/CoreMedia
    update_dyld_shared_cache[1255] current cache file is invalid because it contains a different set of dylibs
    update_dyld_shared_cache failed: could not resolve _CGSAcceleratorForDisplayAlias expected in /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices in /System/Library/QuickTime/QuickTimeComponents.component/Contents/MacOS/QuickTimeComponents
    And this message still comes up!:
    dyld: shared cached file was build against a different libSystem.dylib, ignoring cache.
    It's really annoying, please help!

    We have been suffering from this too. Fortunately, we finally figured out how to clear the shared cache via a Safe Boot. Sharing with everybody:
    - Shut down your Mac.
    - Start your Mac, and hold the Shift key as soon as the startup sound plays.
    - When the Apple logo and the spinning indicator appear, you can release the Shift key.
    - Once your OS has booted, your caches will have been cleaned.
    - Then restart your Mac normally.
    Now, congratulations, you will never see the warning message in your Terminal or Console again.
    Hope this is helpful.
    Love
    XPGtester

  • Using EclipseLink or JPA with Oracle Coherence as shared cache

    Hi,
    I expect to use EclipseLink with Oracle Coherence as shared cache, mainly with/for Entity Caching.
    * JPA)
    For JPA queries, are the JPA semantics fully preserved when using Oracle Coherence?
    * Non-JPA)
    Do the non-JPA APIs also take advantage of Oracle Coherence?
    For these non-JPA queries, are the semantics fully preserved when using Oracle Coherence?
    Regards,
    Dominique
    PS: I hope this is the right forum to ask my questions.
    Otherwise, could you tell me what the right forum is?
    Thanks.

    Yes, JPA semantics are fully preserved when using TopLink-Grid to cache entities within Coherence.
    Yes, the native EclipseLink APIs can also take advantage of TopLink-Grid and Coherence.
    I am not sure what you mean by "For these non-JPA queries, are the semantics fully preserved when using Oracle Coherence?" Perhaps you can provide an example of what you are looking for?
    --Gordon

  • How to config a shared cache for multiple environments with C API

    How do you configure a shared cache for multiple environments with the C API, just like the Java edition ("Chapter 2. Database Environments")?
    I want to open a large number of databases, at least 10,000. But as the count of opened databases increases, the db->open operation becomes very slow. It takes almost 2 hours for 10,000 databases.
    So I am trying to distribute these databases across multiple environments (for example, 5 envs). And in order to improve the efficiency of memory use, I want to share the cache between envs.

    Hi,
    It is not clear what you mean by multiple environments. Do you mean these environments are in different directories or in the same directory? If you mean environments in different dirs sharing the same cache, it is interesting why you would need that.
    If you do not use DB_PRIVATE to open the environment, the created cache will be on disk, in the environment directory, so it can be shared by multiple processes and multiple threads. Currently, the cache file is in the environment directory, and we do not support specifying a separate directory for the cache only.
    Regards,
    Oracle Berkeley DB.
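    As a sketch of the point above (using the Java base-API binding, com.sleepycat.db, rather than the C API; the flags map one-to-one): every process that opens the same environment home without DB_PRIVATE attaches to that environment's one file-backed cache region, so the cache is shared by all of an environment's processes and threads, even though it cannot span environments.
    import java.io.File;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;

    public class JoinEnvCache {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);
            cfg.setInitializeCache(true);          // DB_INIT_MPOOL
            cfg.setCacheSize(64L * 1024 * 1024);   // one cache for every database in this env
            // cfg.setPrivate(true) would be DB_PRIVATE: a heap-allocated region private
            // to this process, which no other process could attach to
            Environment env = new Environment(new File("/envs/env1"), cfg);
            // ... open the databases assigned to this environment ...
            env.close();
        }
    }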

  • Single shared cache for Bridge CS5 and CS6

    I currently have Photoshop CS5.1 and CS6 installed on my Mac. Although I am only using CS6 for my work I have kept the previous version installed just in case any problems emerge with CS6 using Mountain Lion.
    My simple question is whether or not I have to maintain separate cache folders for both versions of Bridge or whether they can actually be shared? At present I have two separate cache folders but the CS6 version is larger for some reason. I use Lightroom 4 generally for Raw processing.

    Yes, at present I have the original cache that was created by CS5.1 and then a second one that was created by CS6. These are stored by default in the user's Library, but I read somewhere that performance was enhanced if they were moved to the same folder as the images, so I did that manually.
    Apart from the space saving, it occurred to me that a shared cache would mean both versions stayed up to date, but I wonder about the difference between the two Raw processing engines and suspect that would be the biggest block against using a shared cache. I can't remember ever seeing this point addressed by Adobe, so it would be good to have an authoritative answer one way or another.

  • Crystal Report - Could not create shared cache

    Hi all,
    When I try to log in to Crystal Reports 9, I get the below error:
    Could not create shared cache(0,0)
    when viewing a report in Crystal Reports.
    Could you please provide me some information?

    Post to the Crystal Reports Design forum.
    Ludek

  • Terminal shared cache different libSystem.dylib

    When Terminal.app started, I now get the following message. Is it serious?
    dyld: shared cached file was build against a different libSystem.dylib, ignoring cache
    How do I clear it - What cache and where is it?
    Snow installed from scratch, clean, fresh. Been adding other vendors.

    From http://discussions.apple.com/thread.jspa?threadID=1260029
    try this:
    sudo update_dyld_shared_cache -force
    BTW, Terminal queries are best handled at the Unix forum under OS X Technologies.

  • Enabling shared cache mode for SQLite connections?

    SQLite supports shared-cache connections, which provide more granular write locks among multiple connections, allowing for greater concurrency across connections within a single application. http://www.sqlite.org/sharedcache.html
    This could be very useful for applications that open multiple async connections to perform updates to different tables.
    Is there any way that an AIR application can get access to the shared cache connections?
    Thanks,
    Peter

    Hi Ryan,
    I am trying to activate the delta cache for some queries in RSRT.
    We have many application servers, so, as you advised, I have chosen Mode 4.
    And I ticked the delta cache check box.
    Apart from that, do I need to make any other settings?
    Can you please also let me know what settings need to be made for the below:
    Read Mode
    Persistence Mode
    Optimization Mode
    It'll be very helpful for me.
    Thanks,
    Nisha

  • How does the "je.maxMemoryPercent" setting relate to shared cache?

    Assume there are two environments which are set to use 50% and 20% of memory, and each environment is configured to use the shared cache. Wouldn't these settings contradict each other?
    Thanks,

    Hi,
    From the javadoc
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/je/EnvironmentMutableConfig.html#setCachePercent(int)
    we have:
    If setSharedCache(true) is called, setCacheSize and setCachePercent specify the total size of the shared cache, and changing these parameters will change the size of the shared cache.
    So the last call to change the cache size -- for any environment sharing the cache -- will be the one to take effect for the shared cache.
    --mark
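    As an illustration of that javadoc behavior, here is a minimal sketch (JE API; the directory names are made up): both environments join the process-wide shared cache, and the most recent setCachePercent() call determines its total size.
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class SharedCacheDemo {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig cfg1 = new EnvironmentConfig();
            cfg1.setAllowCreate(true);
            cfg1.setSharedCache(true);  // join the process-wide shared cache
            cfg1.setCachePercent(50);   // shared cache sized at 50% of the JVM heap
            Environment env1 = new Environment(new File("/tmp/env1"), cfg1);

            EnvironmentConfig cfg2 = new EnvironmentConfig();
            cfg2.setAllowCreate(true);
            cfg2.setSharedCache(true);
            cfg2.setCachePercent(20);   // last call wins: the shared cache is now 20%
            Environment env2 = new Environment(new File("/tmp/env2"), cfg2);

            env2.close();
            env1.close();
        }
    }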

  • Install Arch on multiple machines with shared cache?

    Hi!
    I would like to install Arch Linux on about 8 machines (all using the same arch). I thought that the best way to do this would be a central shared pacman cache on my server, so that I don't have to download every package two or three times.
    The only problem I have is during the installation. Am I right that during the installation Arch expects the packages to be mounted at /src/core/pkg instead of /var/cache/pacman/pkg? The other thing is that I don't know what's up with the /var/lib/pacman/{sync,local} folders. Where are they expected to be during the install process?
    Are there easier methods to do an Arch Linux install with local packages?
    Thanks in advance
    schneida
    Last edited by schneida (2009-12-04 11:55:51)

    schneida wrote: Are there easier methods to do an Arch Linux install with local packages?
    Personally I use a squid cache proxy to keep a copy of the packages that will be used across the various computers.
    The interesting sections of my /etc/squid/squid.conf file are:
    maximum_object_size 500 MB
    refresh_pattern abs.tar.gz$ 0 20% 4320 refresh-ims
    refresh_pattern db.tar.gz$ 0 20% 4320 refresh-ims
    refresh_pattern files.tar.gz$ 0 20% 4320 refresh-ims
    refresh_pattern pkg.tar.gz$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store
    The patterns will:
    * Cache all pacman packages for at least a week.
    * Check with the server whether the database has been modified since it was last accessed.

  • Open cursors and shared cached cursors

    Hi
    In an ADDM report I found the recommendation below. Before changing any parameters I want to understand them: is there a rule of thumb for these parameters, and is there any drawback if I increase them?
    FINDING 7: 2.1% impact (10693 seconds)
    Soft parsing of SQL statements was consuming significant database time.
    RECOMMENDATION 1: Application Analysis, 2.1% benefit (10693 seconds)
    ACTION: Investigate application logic to keep open the frequently used cursors. Note that cursors are closed by both cursor close calls and session disconnects.
    RECOMMENDATION 2: DB Configuration, 2.1% benefit (10693 seconds)
    ACTION: Consider increasing the maximum number of open cursors a session can have by increasing the value of parameter "open_cursors".
    ACTION: Consider increasing the session cursor cache size by increasing the value of parameter "session_cached_cursors".
    RATIONALE: The value of parameter "open_cursors" was "300" during the analysis period.
    RATIONALE: The value of parameter "session_cached_cursors" was "20" during the analysis period.
    Thanks and Regards
    Jafar

    Jafar,
    Your system suffers from soft parsing (according to ADDM), therefore:
    - Increasing the value of open_cursors has no impact on soft parsing (only up to 9.2.0.4 did open_cursors have a direct impact on that, for PL/SQL programs).
    - Increasing the value of session_cached_cursors might help reduce soft parsing. Whether it helps or not really depends on the application.
    ADDM is probably advising to increase open_cursors as well because the database engine will keep cursors open even if the application closes them.
    HTH
    Chris
    PS: cursor_sharing might be helpful to reduce hard parses. It has no impact on soft parses... So, forget the hint about it.

  • Sharing/Caching (Dynamic) Images

    I find that the Flash Player does not cache dynamic images (even in the same session), so my question is: how would I share images across multiple image components, such that I can build an asset provider of sorts to assist with caching images and other assets?
    I've tried a simplistic approach wherein I assign the source of one image component to another, but then the first image component goes blank and the image "transfers" to the second.
    Not sure if BitmapData will help with this either.
    Any help/direction is much appreciated.
    Thanks.
    Shiv.

    URL? Code?
    Nancy Gill
    Adobe Community Expert
    Author: Dreamweaver 8 e-book for the DMX Zone
    Co-Author: Dreamweaver MX: Instant Troubleshooter (August, 2003)
    Technical Editor: Dreamweaver CS3: The Missing Manual, DMX 2004: The Complete Reference, DMX 2004: A Beginner's Guide, Mastering Macromedia Contribute
    Technical Reviewer: Dynamic Dreamweaver MX/DMX: Advanced PHP Web Development
    "kidcobra" <[email protected]> wrote in message news:g2oqb0$bgq$[email protected]..
    > PHP, dynamic images from mysql dbase.... display on local server, but will only display on web server if in the same level or folder as the php page. Locally, I have put the path to the images in the dbase in front of the URL of the image, and all works fine.... but with that path in there and images not in same folder as page, no web server display. I know I'm missing the obvious, but need help if anyone has the simple answer. Thanks

  • Dyld: shared cached file is corrupt: /var/db/dyld/dyld_shared_cache_ppc

    My Leopard is an upgrade from Tiger. I created a new Droplet and discovered that the error message in the title of this post was produced each time the Droplet was called. There was no failure at the GUI level. Weird, since I have an Intel-based Mac, not a PowerPC (the ppc suffix in the message). Also, I found that dyld_shared_cache_ppc did not exist in the dyld directory. The following Terminal command fixed the problem:
    update_dyld_shared_cache -force
    I hope this post saves time for others.

    Not only did you save me time, you saved my sanity. I've been on the brink of being institutionalized for weeks having tried a million other solutions to no avail. This worked magically. Can't thank you enough!!!!

  • How to create a cache for JPA Entities using an EJB

    Hello everybody! I have recently gotten started with JPA 2.0 (I use EclipseLink) and EJB 3.1, and I'm having trouble figuring out how best to implement a cache for my JPA entities using an EJB.
    In the following I try to describe my problem. I know it is a bit verbose, but I hope somebody will help me. (I highlighted in bold the core of my problem, in case you want to first decide if you can/want to help and in that case spend another couple of minutes to understand the domain.)
    I have the following JPA Entities:
    @Entity
    class Genre {
        private String name;
        @OneToMany(mappedBy = "genre", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<Novel> novels;
    }

    @Entity
    class Novel {
        @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Genre genre;
        private String titleUnique;
        @OneToMany(mappedBy = "novel", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> editions;
    }

    @Entity
    class NovelEdition {
        private String publisherNameUnique;
        private String year;
        @ManyToOne(optional = false, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
        private Novel novel;
        @ManyToOne(optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Catalog appearsInCatalog;
    }

    @Entity
    class Catalog {
        private String name;
        @OneToMany(mappedBy = "appearsInCatalog", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
        private Collection<NovelEdition> novelsInCatalog;
    }
    The idea is to have several Novels, each belonging to a specific Genre, of which more than one edition can exist (different publisher, year, etc.). For simplicity, a NovelEdition can belong to just one Catalog, each Catalog being represented by a text file like this:
    FILE 1:
    Catalog: Name Of Catalog 1
    "Title of Novel 1", "Genre1 name","Publisher1 Name", 2009
    "Title of Novel 2", "Genre1 name","Pulisher2 Name", 2010
    FILE 2:
    Catalog: Name Of Catalog 2
    "Title of Novel 1", "Genre1 name","Publisher2 Name", 2011
    "Title of Novel 2", "Genre1 name","Pulisher1 Name", 2011
    Each entity has an associated Stateless EJB that acts as a DAO, using a transaction-scoped EntityManager. For example:
    @Stateless
    public class NovelDAO extends AbstractDAO<Novel> {
        @PersistenceContext(unitName = "XXX")
        private EntityManager em;

        protected EntityManager getEntityManager() {
            return em;
        }

        public NovelDAO() {
            super(Novel.class);
        }
        // NovelDAO-specific methods
    }
    I am interested in the moment when the catalog files are parsed and the corresponding entities are built (I usually read a whole batch of Catalogs at a time).
    Since the parsing is a string-driven procedure, I don't want to repeat actions like novelDAO.getByName("Title of Novel 1"), so I would like to use a centralized cache for mappings of type string-identifier -> entity object.
    Currently I use 3 objects:
    1) The file parser, which does something like:
    final CatalogBuilder catalogBuilder = // JNDI lookup
    // for each file:
    String catalogName = parseCatalogName(file);
    catalogBuilder.setCatalogName(catalogName);
    // for each novel edition:
    String title = parseNovelTitle();
    String genre = parseGenre();
    catalogBuilder.addNovelEdition(title, genre, publisher, year);
    // end foreach
    catalogBuilder.build();
    2) The CatalogBuilder is a Stateful EJB which uses the Cache and gets re-initialized every time a new Catalog file is parsed and gets "removed" after a catalog is persisted.
    @Stateful
    public class CatalogBuilder {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private Cache cache;
        private Catalog catalog;

        @PostConstruct
        public void initialize() {
            catalog = new Catalog();
            catalog.setNovelsInCatalog(new ArrayList<NovelEdition>());
        }

        public void addNovelEdition(String title, String genreStr, String publisher, String year) {
            Genre genre = cache.findGenreCreateIfAbsent(genreStr); //##
            Novel novel = cache.findNovelCreateIfAbsent(title, genre); //##
            NovelEdition novEd = new NovelEdition();
            novEd.setNovel(novel);
            // novEd.set publisher, year, catalog
            catalog.getNovelsInCatalog().add(novEd);
        }

        public void setCatalogName(String name) {
            catalog.setName(name);
        }

        @Remove
        public void build() {
            em.merge(catalog);
        }
    }
    3) Finally, the problematic bean: Cache. For CatalogBuilder I used an EXTENDED persistence context (which I need, as the parser executes several successive transactions) together with a Stateful EJB; but in this case I am not really sure what I need. In fact, the cache:
    Should stay in memory until the parser is finished with its job, but not longer (it should not be a singleton), as the parsing is a very particular activity which happens rarely.
    Should keep all of the entities in context, and should return managed entities from the methods marked with ##; otherwise the attempt to persist the catalog will fail (duplicated INSERTs).
    Should use the same persistence context as the CatalogBuilder.
    What I have now is:
    @Stateful
    public class Cache {
        @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
        private EntityManager em;
        @EJB
        private sessionbean.GenreDAO genreDAO;
        // DAOs for other cached entities

        Map<String, Genre> genreName2Object = new TreeMap<String, Genre>();

        @PostConstruct
        public void initialize() {
            for (Genre g : genreDAO.findAll()) {
                genreName2Object.put(g.getName(), em.merge(g));
            }
        }

        public Genre findGenreCreateIfAbsent(String genreName) {
            if (genreName2Object.containsKey(genreName)) {
                return genreName2Object.get(genreName);
            }
            Genre g = new Genre();
            g.setName(genreName);
            g.setNovels(new ArrayList<Novel>());
            genreDAO.persist(g);
            genreName2Object.put(genreName, em.merge(g));
            return g;
        }
    }
    But honestly I couldn't find a solution which satisfies these 3 points at the same time. For example, using another stateful bean with an extended persistence context (PC) would work for the 1st parsed file, but I have no idea what should happen from the 2nd file on. Indeed, for the 1st file the PC will be created and propagated from CatalogBuilder to Cache, which will then use the same PC. But after build() returns, the PC of CatalogBuilder should (I guess) be removed and re-created during the subsequent parsing, while the PC of Cache should stay "alive": shouldn't an exception be thrown in this case? Another problem is what to do when the Cache bean is passivated. Currently I get the exception:
    "passivateEJB(), Exception caught ->
    java.io.IOException: java.io.IOException
    at com.sun.ejb.base.io.IOUtils.serializeObject(IOUtils.java:101)
    at com.sun.ejb.containers.util.cache.LruSessionCache.saveStateToStore(LruSessionCache.java:501)"
    Hence, I have no idea how to implement my cache. Can you please tell me how you would solve the problem?
    Many thanks!
    Bye

    Hi Chris,
    thanks for your reply!
    I've tried adding the following to persistence.xml (although I've read that EclipseLink uses an L2 cache by default):
    <shared-cache-mode>ALL</shared-cache-mode>
    Then I replaced the Cache bean with a stateless bean which has methods like:
    Genre findGenreCreateIfAbsent(String genreName) {
        Genre genre = genreDAO.findByName(genreName);
        if (genre != null) {
            return genre;
        }
        genre = // build new Genre object
        genreDAO.persist(genre);
        return genre;
    }
    As far as I understood, the shared cache should automatically store the genre and avoid querying the DB multiple times for the same genre, but unfortunately this is not the case: with a FINE logging level I really see a lot of SELECT queries, which I didn't see with my "home made" Cache...
    I am really confused.. :(
    Thanks again for helping + bye
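    One plausible explanation, not confirmed in this thread: EclipseLink's shared cache is keyed by primary key, so an em.find() by id can be served from it, while a JPQL query on a non-key field (such as findByName) still goes to the database unless query-result caching is enabled. A minimal sketch under that assumption, where the Genre entity and its id are placeholders from the post above and the hint is EclipseLink-specific:
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.TypedQuery;

    public class CacheLookups {
        // A primary-key lookup can be answered from the shared (L2) cache:
        static Genre byId(EntityManager em, Long genreId) {
            return em.find(Genre.class, genreId);
        }

        // A JPQL query on a non-key column hits the database by default;
        // this hint asks EclipseLink to cache the query result as well:
        static Genre byName(EntityManager em, String genreName) {
            TypedQuery<Genre> q = em.createQuery(
                    "SELECT g FROM Genre g WHERE g.name = :name", Genre.class);
            q.setParameter("name", genreName);
            q.setHint("eclipselink.query-results-cache", "true");
            List<Genre> results = q.getResultList();
            return results.isEmpty() ? null : results.get(0);
        }
    }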
