BDB Native Version 5.0.21 - asynchronous write at the master node

Hi There,
As part of performance tuning, we are considering introducing asynchronous writes at the master node in replication code that uses the BDB native edition (11g).
Are there any known issues with asynchronous writes at the master node? We'd like to confirm with Oracle before we promote this to production.
For asynchronous writes at the master node we have configured a TaskExecutor as follows:
<bean id="MasterAsynchronousWriteTaskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="3"/>
    <property name="maxPoolSize" value="10"/>
    <property name="daemon" value="true"/>
    <property name="queueCapacity" value="200000"/>
    <property name="threadNamePrefix" value="Master_Entity_Writer_Thread"/>
    <property name="threadGroupName" value="BDBMasterWriterThreads"/>
</bean>
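For reference, here is a programmatic sketch equivalent to the XML above (a sketch only; the CallerRunsPolicy handler at the end is an idea we are evaluating, not something we have deployed). Note one detail of the underlying java.util.concurrent.ThreadPoolExecutor: the pool grows beyond corePoolSize only once the queue is full, so with queueCapacity=200000 the executor will effectively run on 3 threads.

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class MasterWriterExecutorFactory {

    // Programmatic equivalent of the MasterAsynchronousWriteTaskExecutor bean.
    public static ThreadPoolTaskExecutor create() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(3);
        executor.setMaxPoolSize(10); // only reached once the 200k queue is full
        executor.setQueueCapacity(200000);
        executor.setDaemon(true);
        executor.setThreadNamePrefix("Master_Entity_Writer_Thread");
        executor.setThreadGroupName("BDBMasterWriterThreads");
        // Idea under evaluation: run the task on the submitting thread when
        // the queue is full, giving back-pressure instead of a rejection.
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return executor;
    }
}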
Local tests showed no issues. Please let us know at your earliest convenience if any changes are required to the corePoolSize, maxPoolSize, and queueCapacity values as a result of the asynchronous writes.
To summarize, two questions:
1) Are there any known issues with asynchronous writes at the master node for BDB Native, version 5.0.21?
2) If there are no issues, are any changes required to the corePoolSize, maxPoolSize, and queueCapacity values as a result of the asynchronous writes, based on the configuration above?
Thank you!

Hello,
If you have not already, please take a look at the documentation on "Database and log file archival" at:
http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/transapp_archival.html
Are you periodically creating backups of your database files? These snapshots are either a standard backup, which creates a consistent picture of the databases as of a single instant in time, or an on-line backup (also known as a hot backup), which creates a consistent picture of the databases as of an unspecified instant during the period of time when the snapshot was made. After backing up the database files you should periodically archive the log files being created in the environment. I believe the question here is how often that periodic archival should take place to establish the best protocol for catastrophic recovery in the case of a failure such as the physical hardware being destroyed.
As the documentation describes, it is often helpful to think of database archival in terms of full and incremental filesystem backups. A snapshot is a full backup, whereas the periodic archival of the current log files is an incremental backup. For example, it might be reasonable to take a full snapshot of a database environment weekly or monthly, and archive additional log files daily. Using both the snapshot and the log files, a catastrophic crash at any time can be recovered to the time of the most recent log archival, a time long after the original snapshot.
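For example, if you are driving archival from the Java API, one pass might look roughly like the sketch below (a sketch only; I am quoting the com.sleepycat.db Environment method names from memory, so please verify them against the javadoc for your release):

import java.io.File;

import com.sleepycat.db.Environment;

public class ArchivalSketch {

    // Rough outline of one archival pass: copy the log files that are no
    // longer needed for normal operation, plus the database files that a
    // full snapshot requires, to backup media.
    static void archiveOnce(Environment env) throws Exception {
        // Log files no longer in use (the db_archive equivalent); these are
        // the incremental part of the backup.
        File[] logs = env.getArchiveLogFiles(false /* includeInUse */);
        for (File log : logs) {
            // copy 'log' to backup media here
        }

        // Database files that a full snapshot would need to include
        // (the db_archive -s equivalent).
        File[] dbs = env.getArchiveDatabases();
        for (File dbFile : dbs) {
            // copy 'dbFile' as part of the periodic full snapshot
        }
    }
}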
What other details can you provide about how much activity there is on your system with regard to log file creation, how often a full backup is being taken, and so on?
Thanks,
Sandra

Similar Messages

  • How do I "save as" or use an equivalent function so that I can save an altered version of a template .pdf separate from the master copy?

    Hello, my company has recently purchased several tablets for our employees out in the field so they can fill out forms on the go without using paper.  For example:  We have a timesheet .pdf that would need to be filled out, then saved separately from the original once a week.  Is this a function that can be achieved?

    Hi,
    You can do this by following these steps:
    1. Save the timesheet.pdf on the device.
    2. Long-press it, select the Overflow icon on the top bar, and select “Duplicate”.
    3. A new copy of the PDF, “Timesheet(1).pdf”, will be made. You can edit this copy, and the original “timesheet.pdf” will be retained.
    You can follow these steps again to keep track of different weeks.
    Hope this helps. Please write back to us in case of any queries.
    Thanks,
    Adobe Reader Team

  • 11g Express Edition Wishlist - native version in x86_64

    I think the new release in both native versions of 32 & 64 bits sounds logical.

    Hi C.
    I've compared two not-quite-comparable situations on two similar PCs, where the same app is developed on a 64-bit version of Linux with 10gR2 and runs on a 32-bit Windows XP XE database. It isn't a fair comparison because the Linux box assigns almost 2 GB to the instance.
    I have read a lot about the relative advantages of 64 bits over 32, especially what you noted about the memory limit (though I think all XE users are expecting the limit to be extended to 2 or 4 GB). But in terms of this concrete example, I can say the 64-bit DB's performance (with the larger memory size) doubles that of XE.
    I also think that building 32-bit-only software today can only be seen as placing a limitation on a product that you don't have to pay for.
    Regards.

  • Hello, I have a problem: when I connect my iPad 2 to the computer and click on "Programs" in iTunes, iTunes crashes and simply reports an error. This does not happen when connecting the iPhone. Everything is on version 7.0, and iTunes is also the latest version. Thank you.


    There are multiple reasons that can lead to this issue. You should read the troubleshooting guide to find the right solution: iPhone, iPad, or iPod touch not recognized in iTunes for Windows - Apple Support

  • Multithreaded write support in Master

    1. We have a design in which we write synchronously (serialized) at the master level. Since serialized writes are a bottleneck in a multithreaded environment, we removed the synchronization, which led to too many DB_LOCK_NOTGRANTED exceptions at the master.
    Note: write transaction timeout = 10 secs (which is quite high).
    The pseudocode below illustrates the point:
    HttpController
    requestHandler() {
         synchronized {
              adapter.write(entity);
              // the adapter writes data to the master inside a transaction with a 10-sec timeout
         }
    }
    The questions are:
    a) Does BDB support multithreaded writes at the master?
    b) If yes, is it configurable?
    2. In our new design, we have created asynchronous threads (3 threads with a queue capacity of 20k) at the master level. Each of the 8-13 replicas in production will have asynchronous threads (3 threads with a queue capacity of 200k). What is the optimum number of threads and what queue capacity should we use at the master level?
    For example:
    HttpController
    requestHandler() {
         asynchronousThreadExecutor.execute(entity, adapter);
         // asynchronous executor with 3 threads, 20k queue capacity
    }

    Corrigendum: added information about the BDB version and some more helpful context.
    Hi There,
    We have the following questions w.r.t. multithreaded writes at the master.
    1. We have a design in which we write synchronously (serialized) at the master level. Since serialized writes are a bottleneck in a multithreaded environment, we removed the synchronization, which led to too many DB_LOCK_NOTGRANTED exceptions at the master.
    Note: write transaction timeout = 10 secs (which is quite high).
    The pseudocode below illustrates the point:
    HttpController
    requestHandler() {
         synchronized {
              adapter.write(entity);
              // the adapter writes data to the master inside a transaction with a 10-sec timeout
         }
    }
    The questions are:
    a) Does BDB support multithreaded writes at the master?
    b) If yes, is it configurable?
    2. In our new design, we have created asynchronous threads (3 threads with a queue capacity of 20k) at the master level. Each replica will have asynchronous threads (3 threads with a queue capacity of 200k). In production the replica count could go up to 13.
    a) What is the optimum number of threads that we can have at the master level?
    b) What would be the ideal queueCapacity value for the executor (we use the Spring thread executor)?
    For example (a concrete Java sketch follows):
    HttpController
    requestHandler() {
         asynchronousThreadExecutor.execute(entity, adapter);
         // asynchronous executor with 3 threads, 20k queue capacity
    }
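    For illustration, here is a minimal Java sketch of the hand-off above (Adapter and Entity are placeholders standing in for our real classes; this assumes Spring's TaskExecutor abstraction):

    import org.springframework.core.task.TaskExecutor;

    public class HttpController {

        private final TaskExecutor asynchronousThreadExecutor; // 3 threads, 20k queue
        private final Adapter adapter;

        public HttpController(TaskExecutor asynchronousThreadExecutor, Adapter adapter) {
            this.asynchronousThreadExecutor = asynchronousThreadExecutor;
            this.adapter = adapter;
        }

        public void requestHandler(final Entity entity) {
            // Queue the write and return; an executor thread performs the
            // transactional write to the master off the request thread.
            asynchronousThreadExecutor.execute(new Runnable() {
                public void run() {
                    adapter.write(entity);
                }
            });
        }

        // Placeholder types standing in for our real classes.
        interface Adapter { void write(Entity e); }
        interface Entity {}
    }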
    Thanks.

  • I have the latest MacBook Air 13" and a Seagate 500MB external hard disk. I can't copy or cut any files on the hard disk, and I can't even delete any files. I have the settings as Read and Write in the Get Info tab. Please help

    I have the latest MacBook Air 13" and a Seagate 500MB external hard disk. I can't copy or cut any files on the hard disk, and I can't even delete any files. I have the settings as Read and Write in the Get Info tab. Please help. Also note that my hard drive was formatted on a Windows 7 laptop.

    That's the problem: it's in MS-DOS (FAT) or NTFS format for Windows.
    Options:
    1. Offload all the data on the HD onto your PC, THEN format the HD as exFAT for use on BOTH PC and Mac for read/write, then reload all that data (or as much as you need) back onto the HD.
    2. Get another HD, and format it as Mac OS X Extended (Journaled).
    FAT32 (File Allocation Table)
    Read/Write FAT32 from both native Windows and native Mac OS X.
    Maximum file size: 4GB.
    Maximum volume size: 2TB
    You can use this format if you share the drive between Mac OS X and Windows computers and have no files larger than 4GB.
    NTFS (Windows NT File System)
    Read/Write NTFS from native Windows.
    Read only NTFS from native Mac OS X
    To Read/Write/Format NTFS from Mac OS X, here are some alternatives:
    For Mac OS X 10.4 or later (32 or 64-bit), install Paragon (approx $20) (Best Choice for Lion)
    Native NTFS support can be enabled in Snow Leopard and Lion, but is not advisable, due to instability.
    AirPort Extreme (802.11n) and Time Capsule do not support NTFS
    Maximum file size: 16 TB
    Maximum volume size: 256TB
    You can use this format if you routinely share a drive with multiple Windows systems.
    HFS+ (Mac format) (Hierarchical File System, a.k.a. Mac OS Extended (Journaled); don't use case-sensitive)
    Read/Write HFS+ from native Mac OS X
    Required for Time Machine or Carbon Copy Cloner or SuperDuper! backups of Mac internal hard drive.
    To Read HFS+ (but not Write) from Windows, Install HFSExplorer
    Maximum file size: 8EiB
    Maximum volume size: 8EiB
    You can use this format if you only use the drive with Mac OS X, or use it for backups of your Mac OS X internal drive, or if you only share it with one Windows PC (with MacDrive installed on the PC)
    exFAT (FAT64, Extended File Allocation Table) - can read/write from both PC and Mac
    Supported in Mac OS X only in 10.6.5 or later.
    Not all Windows versions support exFAT.
    AirPort Extreme (802.11n) and Time Capsule do not support exFAT
    Maximum file size: 16 EiB
    Maximum volume size: 64 ZiB
    You can use this format if it is supported by all computers with which you intend to share the drive.  See "disadvantages" for details.

  • Credential Roaming failed to write to the Active Directory. Error code 5 (Access is denied.)

    Hi All,
    I can see the following error event on all client computers. Could someone please help me with this?
    Log Name:      Application
    Source:
    Microsoft-Windows-CertificateServicesClient-CredentialRoaming
    Event ID:      1005
    Level:         Error
    Description:   Certificate Services Client: Credential Roaming failed to write to the Active Directory. Error code 5 (Access is denied.)
    Regards, Srinivasu.Muchcherla

    If you are not using certificates and Credential Roaming for clients then simply ignore the error message.
    If you are using certificates then you are getting access denied message when Credential Roaming is trying to write to your AD. More details about Credential Roaming here: http://blogs.technet.com/b/askds/archive/2009/01/06/certs-on-wheels-understanding-credential-roaming.aspx
    http://blogs.technet.com/b/instan/archive/2009/05/26/considerations-for-implementing-credential-roaming.aspx
    This is probably related to the fact that your schema version is not 44 or higher: https://social.technet.microsoft.com/Forums/windowsserver/en-US/5b3a6e61-68c4-47d3-ae79-8296cb3be315/certificateservicesclientcredentialroaming-errors?forum=winserverGP
    Active Directory     ObjectVersion
    Windows 2000         13
    Windows 2003         30
    Windows 2003 R2      31
    Windows 2008         44
    Windows 2008 R2      47
    This posting is provided AS IS with no warranties or guarantees, and confers no rights.
    Ahmed MALEK

  • Can multiple threads write to the database?

    I am a little confused by this statement in the documentation: "Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time."
    1. Can multiple threads write to the "Simple Data Store"?
    2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    #include "stdafx.h"
    #include <stdio.h>
    #include <windows.h>
    #include <db.h>
    static DB *db = NULL;
    static DB_ENV *dbEnv = NULL;
    DWORD WINAPI th_write(LPVOID lpParam)
    DBT key, data;
    char key_buff[32], data_buff[32];
    DWORD i;
    printf("thread(%s) - start\n", lpParam);
    for (i = 0; i < 200; ++i)
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    sprintf(key_buff, "K:%s", lpParam);
    sprintf(data_buff, "D:%s:%8d", lpParam, i);
    key.data = key_buff;
    key.size = strlen(key_buff);
    data.data = data_buff;
    data.size = strlen(data_buff);
    db->put(db, NULL, &key, &data, 0);
    Sleep(5);
    printf("thread(%s) - End\n", lpParam);
    return 0;
    int main()
    db_env_create(&dbEnv, 0);
    dbEnv->open(dbEnv, NULL, DB_CREATE | DB_INIT_MPOOL | DB_THREAD, 0);
    db_create(&db, dbEnv, 0);
    db->open(db, NULL, "test.db", NULL, DB_BTREE, DB_CREATE, 0);
    CreateThread(NULL, 0, th_write, "A", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "B", 0, 0);
    CreateThread(NULL, 0, th_write, "C", 0, 0);
    th_write("C");
    Sleep(2000);
    }

    Here is some clarification about BDB locking and multithreaded behavior.
    Question 1. Can multiple threads write to the "Simple Data Store"?
    Answer 1.
    Please refer to http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    A Data Store (DS) setup (that is, one not using an environment, or using one without any of the DB_INIT_LOCK, DB_INIT_TXN, or DB_INIT_LOG environment region flags, each corresponding to the locking, transaction, and logging subsystems respectively) will not guard against data corruption due to threads accessing the same database page, overwriting the same records, corrupting the internal structure of the database, and so on. (Note that for the Btree, Hash, and Recno access methods we lock at the database page level; only for the Queue access method do we lock at the record level.)
    So, if you want multiple threads in the application writing concurrently or in parallel to the same database, you need to use locking (and properly handle any potential deadlocks); otherwise you risk corrupting the data itself or the database's internal structure.
    Of course, if you serialize access to the database at the application level, so that no more than one thread writes to the database at a time, there is no need for locking. But that is most likely not the behavior you want.
    Hence, you need to use either a CDS (Concurrent Data Store) or a TDS (Transactional Data Store) setup.
    See the table comparing the various setups here: http://docs.oracle.com/cd/E17076_02/html/programmer_reference/intro_products.html
    Berkeley DB Data Store
    The Berkeley DB Data Store product is an embeddable, high-performance data store. This product supports multiple concurrent threads of control, including multiple processes and multiple threads of control within a process. However, Berkeley DB Data Store does not support locking, and hence does not guarantee correct behavior if more than one thread of control is updating the database at a time. The Berkeley DB Data Store is intended for use in read-only applications or applications which can guarantee no more than one thread of control updates the database at a time.
    Berkeley DB Concurrent Data Store
    The Berkeley DB Concurrent Data Store product adds multiple-reader, single writer capabilities to the Berkeley DB Data Store product. This product provides built-in concurrency and locking feature. Berkeley DB Concurrent Data Store is intended for applications that need support for concurrent updates to a database that is largely used for reading.
    Berkeley DB Transactional Data Store
    The Berkeley DB Transactional Data Store product adds support for transactions and database recovery. Berkeley DB Transactional Data Store is intended for applications that require industrial-strength database services, including excellent performance under high-concurrency workloads of read and write operations, the ability to commit or roll back multiple changes to the database at a single instant, and the guarantee that in the event of a catastrophic system or hardware failure, all committed database changes are preserved.
    So, clearly DS is not a solution for this case, where multiple threads need to write simultaneously to the database.
    CDS (Concurrent Data Store) provides locking features, but only for multiple-reader/single-writer scenarios. You use CDS when you specify the DB_INIT_CDB flag when opening the BDB environment: http://docs.oracle.com/cd/E17076_02/html/api_reference/C/envopen.html#envopen_DB_INIT_CDB
    TDS (Transactional Data Store) provides locking features, adds complete ACID support for transactions and offers recoverability guarantees. You use TDS when you specify the DB_INIT_TXN and DB_INIT_LOG flags when opening the environment. To have locking support, you would need to also specify the DB_INIT_LOCK flag.
    Now, since the requirement is to have multiple writers (multithreaded writes to the database), TDS is the way to go (CDS is useful only in single-writer scenarios, when there is no need for recoverability).
    To summarize:
    The best way to understand which setup is needed is to answer the following questions:
    - What is the data access scenario? Is it multiple writer threads? Will the writers access the database simultaneously?
    - Are recoverability/data durability, atomicity of operations, and data isolation important for the application? http://docs.oracle.com/cd/E17076_02/html/programmer_reference/transapp_why.html
    If the answers are yes, then TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Question 2. Considering the sample code below which writes to the DB using 5 threads - is there a possibility of data loss?
    Answer 2.
    Definitely yes; you can see data loss and/or data corruption.
    You can check the behavior of your test case in the following way:
    1. Run your test case.
    2. After the program exits, run db_verify to verify the database (db_verify -o test.db).
    You will likely see db_verify complain, unless the thread scheduler on Windows happens to start each thread one after the other; in other words, no two or more threads write to the database at the same time, effectively serializing the writes.
    Question 3. If the code will cause data loss, will adding DB_INIT_LOCK and/or DB_INIT_TXN in DBENV->open make any difference?
    Answer 3.
    In your case TDS should be used, and the environment should be opened like this:
    dbEnv->open(dbEnv, ENV_HOME, DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER | DB_THREAD, 0);
    (where ENV_HOME is the filesystem directory where the BDB environment will be created)
    Doing this, you have proper deadlock handling and proper transaction usage in place, so you are protected against potential data corruption/data loss.
    See http://docs.oracle.com/cd/E17076_02/html/gsg_txn/C/BerkeleyDB-Core-C-Txn.pdf
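    If you are using the Berkeley DB Java API (com.sleepycat.db), the equivalent TDS setup with a retry-on-deadlock write loop looks roughly like the sketch below (an approximation; check the exact calls against the javadoc for your release):

    import java.io.File;

    import com.sleepycat.db.Database;
    import com.sleepycat.db.DatabaseConfig;
    import com.sleepycat.db.DatabaseEntry;
    import com.sleepycat.db.DatabaseType;
    import com.sleepycat.db.DeadlockException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.db.LockDetectMode;
    import com.sleepycat.db.Transaction;

    public class TdsSketch {
        public static void main(String[] args) throws Exception {
            // Flags equivalent to DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
            // DB_INIT_TXN | DB_INIT_LOG | DB_RECOVER.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setInitializeCache(true);
            envConfig.setInitializeLocking(true);
            envConfig.setInitializeLogging(true);
            envConfig.setTransactional(true);
            envConfig.setRunRecovery(true);
            envConfig.setLockDetectMode(LockDetectMode.MINWRITE); // run the deadlock detector
            Environment env = new Environment(new File("ENV_HOME"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            dbConfig.setType(DatabaseType.BTREE);
            Database db = env.openDatabase(null, "test.db", null, dbConfig);

            DatabaseEntry key = new DatabaseEntry("K:A".getBytes());
            DatabaseEntry data = new DatabaseEntry("D:A:1".getBytes());

            // On deadlock, abort (releasing locks) and retry the transaction.
            for (int attempt = 0; attempt < 5; attempt++) {
                Transaction txn = env.beginTransaction(null, null);
                try {
                    db.put(txn, key, data);
                    txn.commit();
                    break;
                } catch (DeadlockException de) {
                    txn.abort();
                }
            }

            db.close();
            env.close();
        }
    }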
    Multi-threaded and Multi-process Applications
    DB is designed to support multi-threaded and multi-process applications, but their usage
    means you must pay careful attention to issues of concurrency. Transactions help your
    application's concurrency by providing various levels of isolation for your threads of control. In
    addition, DB provides mechanisms that allow you to detect and respond to deadlocks.
    Isolation means that database modifications made by one transaction will not normally be
    seen by readers from another transaction until the first commits its changes. Different threads
    use different transaction handles, so this mechanism is normally used to provide isolation
    between database operations performed by different threads.
    Note that DB supports different isolation levels. For example, you can configure your
    application to see uncommitted reads, which means that one transaction can see data that
    has been modified but not yet committed by another transaction. Doing this might mean
    your transaction reads data "dirtied" by another transaction, but which subsequently might
    change before that other transaction commits its changes. On the other hand, lowering your
    isolation requirements means that your application can experience improved throughput due
    to reduced lock contention.
    For more information on concurrency, on managing isolation levels, and on deadlock
    detection, see Concurrency (page 32).

  • Unable to refresh the schema of FIM MA. Getting an error in Event Viewer: "the current version of database is not compatible with the one expected by Forefront Identity Manager service. The current version of database is : 1116. The expected version is : 1122"

    Hi,
    We have installed the FIM MA with an account that has all the sufficient rights. It got created successfully and worked for Full Import and Full Sync. But, due to some version incompatibilities, we installed a patch. PFB the link for the patch:
    http://support.microsoft.com/en-us/kb/2969673/en-us
    Now we are trying to refresh the schema of the FIM MA. While doing that we are facing an error, "Failed to connect to database". The user account with which we are connecting has read and write permissions on the DB. In the Event Viewer some errors are logged, like "the current version of database is not compatible with the one expected by Forefront Identity Manager service. The current version of database is : 1116. The expected version is : 1122", with event ID 3. PFB images for a more detailed view.
    Please advise how to fix the issue.
    Thanks,
    Prasanthi.

    Hello,
    It seems to me that you may have only updated the sync engine but not the portal/web service.
    I had that error once after a recovery from scratch, when I had forgotten to apply one of the hotfixes to all services.
    -Peter
    Peter Stapf - ExpertCircle GmbH - My blog:
    JustIDM.wordpress.com

  • The Microsoft Access database engine cannot open or write to the file \\fileserver\db\access.mdb

    Hi,
    I have Windows Server 2012 with SQL 2012 Standard SP1. I am using a linked server and the Access Database Engine 2010 Redistributable to access my database file, made in Microsoft Access (.mdb), from a network file server.
    EXEC master.dbo.sp_addlinkedserver @server = N'MyLinkedServer', @srvproduct=N'MyLinkedServer', @provider=N'Microsoft.ACE.OLEDB.12.0', @datasrc=N'\\myfileserver.mydomain.com\files\mydatabase.mdb'
    My SQL service is running with the domain service account MYDOMAIN\SQL1$; I have added Full Control share and NTFS permissions for it on my file server folder (C:\Files).
    When I open SQL Management Studio on my DB server (as domain admin with UAC-elevated permissions), I can browse tables and everything works.
    The problem is, if I open SQL Management Studio (as domain admin with UAC-elevated permissions) on my file server or any other computer, I get this error when trying to browse my linked server:
    TITLE: Microsoft SQL Server Management Studio
    Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "Synesis_3PRO2013". (Microsoft SQL Server, Error: 7303)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.00.3000&EvtSrc=MSSQLServer&EvtID=7303&LinkId=20476
    When I try to run a simple SELECT SQL query, I get the error:
    OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "MyLinkedServer" returned message "The Microsoft Access database engine cannot open or write to the file '\\myfileserver.mydomain.com\files\mydatabase.mdb'. It is already opened exclusively by another user, or you need permission to view and write its data.".
    I do not have any other program using my Access database, and the user has full control. I am trying the security mode "For a login not defined in the list above, connections will be made without using a security context"; I have also tried all four options.
    I am confused because it works from the SQL Server itself but not from any SQL client domain member computer/server.
    I have same problem in another environment where I have Windows Server 2008 R2 and SQL 2008 R2 SP2.
    Please help.
    -- Hrvoje Kusulja

    NTFS must be fine, since it works from the same server using the same accounts.
    As I understand it, the problem could be that my Access file is not in an Access trusted location. I have now tried adding my Access database file's location to the trusted locations for the user that is my SQL service user (a Windows service AD managed service account, MYDOMAIN\SQL1$) and for my test user, which I use to connect to SQL Server as a client from SQL Management Studio. (That account is in Domain Admins and has full permissions on the SQL Server as well.)
    I have added this .reg:
    Windows Registry Editor Version 5.00
    [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security]
    [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security\Trusted Documents]
    "LastPurgeTime"=dword:01592874
    "DisablePromptOpenNetworkTrustedDocuments"=dword:00000000
    [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security\Trusted Locations]
    "AllowNetworkLocations"=dword:00000001
    [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Access\Security\Trusted Locations\Location0]
    "Description"="My file server"
    "AllowSubFolders"=dword:00000001
    "Path"="\\\\myfileserver.mydomain.com\\files\\"
    I have done this for the SQL service account user and my personal test account, as I said. I have also tried logging off and restarting the SQL service and all the servers.
    The same problem still persists.
    Anyway, thank you for giving me a hint.

  • Unable to write to library iPhoto library - "Check that you have permission to write to the library directory"

    I think my iPhoto library is full. I cannot get in to delete photos. When I open it I get the message "Unable to write to library iPhoto library. Check that you have permission to write to the library directory" and I can't do anything else. I tried alt/cmd First Aid and it didn't work.
    I want to get in and move pics over to a USB stick drive but cannot get past the "Unable to write to library..." message.

    Option 1
    Back up and try to rebuild the library: hold down the command and option (or alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In early versions of Library Manager it's the File -> Rebuild command. In later versions it's under the Library menu.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.  
    Regards
    TD

  • This project must be converted from version (macintosh 64) 12.0.x293. The original file will be unchanged

    Hi there,
    I'm taking my first steps in After Effects.
    I bought Classroom in a Book: After Effects CC.
    The first step I have to take in lesson 1 is "Open and play the lesson01.mov sample movie to see what you will create in this lesson."
    When I double-click this file, this message pops up in AE: "This project must be converted from version (Macintosh 64) 12.0.x293. The original file will be unchanged."
    The file will not play, and in the composition panel only a test image appears.
    Why is this, and how can I fix it?
    I already finished Lesson 1, but I still hope someone can explain this message to me.
    I hope it's not a stupid question; it is particularly difficult for me because English is not my native language and all the lesson material is in English.
    Kind regards,
    Kay

    The warning simply means that the file was created in an older version of AE. The other issue is that you did not put your footage in the correct locations, so it is missing. Use File -> Replace Footage to point to the proper storage locations.
    Mylenium

  • Synchronous and asynchronous IO on the same file descriptor

    Hi, does anybody know about problems with combining synchronous and asynchronous I/O on the same file descriptor? It appears that I may miss SIGIOs in that case.

    I don't think there is any really easy way to do an insert. If writing to a file, I would use a FileWriter, a FileOutputStream, or a RandomAccessFile class. I don't know of any class that will automatically do an insert, so you might have to write your own method that does this. That is pretty easy: RandomAccessFile might be the best choice in most situations (i.e., if you are inserting many times at different locations in the file). Just copy all the bytes from the point in the file where you want to insert up to the end of the file, then write your bit, and then rewrite all the read bytes to the file again. Depending on how many times you are writing to the file and how big the file is, you may want to keep the file in memory at all times, because writing to disk is extremely slow. If the file is huge and you are inserting seldom, then do what I said above. If the point of insertion is closer to the start of the file, it might be a good idea to do the above in reverse: instead of copying the bytes from the point of insertion to the end of the file, copy the bytes from the start of the file to the point of insertion.
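    A rough sketch of that insert-by-rewrite approach with RandomAccessFile (illustrative only; it buffers the tail in memory, so it suits modest file sizes):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class FileInsert {

        // Insert 'extra' at byte offset 'pos' by rewriting the tail of the file.
        static void insert(String path, long pos, byte[] extra) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
                long tailLen = raf.length() - pos;
                byte[] tail = new byte[(int) tailLen]; // the displaced tail
                raf.seek(pos);
                raf.readFully(tail);
                raf.seek(pos);
                raf.write(extra); // write the inserted bytes...
                raf.write(tail);  // ...then rewrite the tail after them
            }
        }
    }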

  • Write off the asset

    Hi,
    Please can someone advise me on how we write off an asset.
    Regards,
    Parul

    Hi,
    You can call up the asset history sheet as a totals list, or as a detail list showing individual assets. The detail list shows the capitalization date for each asset and, if applicable, the deactivation date and any transactions in the year being reported.
    There is a special history sheet version defined for showing special reserves for special depreciation. This version shows the:
    - Initial balance
    - Allocation to special reserves
    - Write-off (depreciation) of special reserves
    - Write-off of special reserves due to asset retirements
    - Write-off of special reserves due to transfers
    - Closing balance
    Procedure
    1. Call up the report for creating the asset history sheet (Info system -> Report selection -> Balance sheet explanations).
    2. Enter the start parameters for the report. Make the following particularly important entries:
    - Company code for the report
    - Report date
    - Sort version that you want to use
    - Asset history sheet version that you want to use
    - Use the "current book value" indicator to specify whether you wish to show planned annual depreciation or the depreciation already posted.
    3. If you have not posted the actual retirement of low-value assets, you can use the asset history sheet to simulate their retirement (in order to minimize the transactions necessary for their retirement). In this case, enter the asset class for low-value assets.
    You also need to specify a time period for the retirement simulation. Please note that the simulation time period must begin on the same date every year so that LVAs can be shown continuously and correctly in the history sheet. This fixed start date should lie in the distant past (for example, 1900). Using the same start date each year ensures that LVAs, for which retirement has already been simulated, do not appear in the history sheet with their APC in subsequent years. In addition, the end date for the simulation time period has to be in a fiscal year that is still open.
    The same procedure applies for intangible assets.
    Graphic: Simulation Time Period
    4. Limit the report as needed. If needed, make additional entries for lists created using batch input.
    For further notes:
    http://help.sap.com/search/search_overview.jsp
    http://help.sap.com/erp2005_ehp_03/helpdata/EN/4f/71e286448011d189f00000e81ddfac/frameset.htm
    nagesh

  • Replica info - filtered vs. read/write of the root partition

    Hello all,
    According to Craig's setup guide, the BM server should be in its own partition, of which it is the master replica, and it should contain a read/write replica of the root partition.
    We currently have this setup, but the one issue we are having is that our content filter is using LDAP to monitor eDir authentication, and it seems to be grabbing some workstation authentications as opposed to the user authentications.
    One suggestion to resolve this issue is to use a filtered replica which only sees users and user groups. Is this an option with our BM servers, or should I be looking at using a different server for LDAP authentication and putting a filtered replica on that one?
    Any thoughts are greatly appreciated.
    Steve D.

    BorderManager needs to read license objects when it launches, as well as
    filtering objects, access rules, and its own configuration from NDS. It
    also needs to read NMAS-related information from NDS.
    I have found that the most efficient way to a) get BMgr to read its
    information and b) fix filtering issues is to have the BMgr server in
    its own OU. In the past, there was also a Site-Site VPN dependency on
    reading a root replica, but that was fixed some time ago. (VPN may
    launch faster if the BM server has a root replica, but it doesn't have
    to have it).
    BM wants to read licenses initially from the root of the replica ring,
    so it helps if the BM server is the master of the replica ring holding
    the licenses. This is not a requirement, but it makes BM launch faster
    usually, and it is especially important in branch offices with a site-to-site
    VPN. BM reads filters from the NBMRuleContainer, which is almost always
    in the same ou as the server. It is easier to fix filtering issues if
    you can simply delete them all and remigrate them into NDS without
    having to worry about filters from some other BM server being in the
    same container. These are the main reasons I like to have BM in its
    own partition and the master of that replica ring.
    It may help to have a replica of the security container on the server
    as well, for nmas-related VPN logins, but I'm not sure on that. If you
    are running NMAS RADIUS on the same server, you need to have replicas
    of the user partitions also on the server. And with NMAS-related
    logins for VPN, you really want all the clients and all the servers
    with user replicas up to the latest version of NMAS.
    Access rules are normally applied to the server object, but if they are
    applied to ou's above BM, it may help to have replicas holding those
    OU's on the server, but it's not required. (BM will have to read the
    OU's from some other server if it can't get them from itself though).
    Offhand, those are the NDS-related requirements I can think of for BM.
    I would put my effort into fixing the LDAP calls that the
    application is using so that it doesn't look at workstation objects,
    rather than trying to filter those objects out. However, perhaps you
    could alter your NDS design and put all the workstation objects into
    one or more OUs that the LDAP app doesn't look at?
    Craig Johnson
    Novell Support Connection SysOp
    *** For a current patch list, tips, handy files and books on
    BorderManager, go to http://www.craigjconsulting.com ***

Maybe you are looking for

  • How do i connect 7.1 speakers to my macbook pro

    Hi, so i have some 7.1 creative gigaworks s750 speakers http://uk.creative.com/corporate/pressroom/releases/detail.asp?ref=11537 I used to have them connected to a soundcard which slotted into my old laptop about 2 years ago but havent used them sinc

  • Problem in displaying a field in webdynpro using Adaptive RFC model

    HI, I created a webdynpro application using adaptive RFC model and i have a problem in displaying one of the output fields. When i execute the function module it is giving the exact value for my output field in this case telephone number (of type STE

  • Confirming web gate installation in OAM 10.1.4.3

    I have installed webgate "Oracle_Access_Manager10_1_4_2_5_Win32_ISAPI_WebGate.exe" (here IIS is the web server),in OAM 10.1.4.3.In the "Access system configuration, i have created a host identifier,created an instance of Access gate then installed th

  • Why is my text jumping?!??!

    Hi there, I often get this problem. I'm working on a DV NTSC project and often when I have moving text I get this bouncing letter problem. You can download here [3.5MB] http://www.box.net/shared/static/o1ccf7xafa.mov and see what happens with the let

  • Is the Enhanced USB Port Replicator Really Hot-Pluggable?

    Folks, I have the T61 and am just getting to know the very nice Enhanced USB Port Replicator.  We have a wireless KB/mouse, video and external hard drive, as well as the ethernet cable, all plugged in. SITUATION "HOT PLUGGABLE":  The somewhat sparse