Too much time to mount data file on Time Capsule, then error -8085

It's taking too much time to open the folder, and sometimes it gives this error code: -8085. Any idea?
Regards

Use Ethernet and see if you can delete the folders, as long as you have them stored elsewhere.
The TC is not a NAS; it is not designed as a file store, and even attempting to use it that way can lead to issues.
Read the info from our main guru, Pondini, on why you shouldn't do this, and on ways around it.
See Q3 here: http://pondini.org/TM/Time_Capsule.html

Similar Messages

  • Taking too much time collecting business content for activation

    Hi all,
    I am collecting business content objects for activation. I have selected the 0FIAA_CHA object, but while collecting for activation it takes too much time, then asks for source
    system authorization, and then throws the error "maximum run time exceeded". I had selected "data flow before" there.
    What can be the reason for this?
    Please help.

    Hi ,
    You should always try to have the latest BI Content patch installed, but I don't think that is the problem here. It seems that there
    are a lot of objects to collect. Under 'Grouping' you can select the option 'Only necessary objects'; please check whether you can
    use this option to install just the objects you need from Content.
    Best Regards,
    Des.

  • Taking too much time using BufferedWriter to write to a file

    Hi,
    I'm using the method extractItems(), given below, to write data to a file. This method takes too much time to execute when the number of records in the enumeration is 10,000 or above; to be precise, it takes around 70 minutes. The writing pauses intermittently for 20 seconds after writing a few lines, and sometimes for much more. Has somebody faced this problem before, and if so, what could be the cause? This is very high-priority work, and it would be really helpful if someone could give me some info on this.
    Thanks in advance.
    public String extractItems() throws InternalServerException {
        try {
            String extractFileName = getExtractFileName();
            FileWriter fileWriter = new FileWriter(extractFileName);
            BufferedWriter bufferWrt = new BufferedWriter(fileWriter);
            CXBusinessClassIfc editClass = new ExploreClassImpl(className, mdlMgr);
            System.out.println("Before -1");
            CXPropertyInfoIfc[] propInfo = editClass.getClassPropertyInfo(configName);
            System.out.println("After -1");
            PrintWriter out = new PrintWriter(bufferWrt);
            System.out.println("Before -2");
            TemplateHeaderInfo.printHeaderInfo(propInfo, out, mdlMgr);
            System.out.println("After -2");
            XDItemSet itemSet = getItemsForObjectIds(catalogEditDO.getSelectedItems());
            Enumeration allitems = itemSet.allItems();
            System.out.println("the batch size : " + itemSet.getBatchSize());
            XDForm frm = itemSet.getXDForm();
            XDFormProperty[] props = frm.getXDFormProperties();
            System.out.println("Before -3");
            bufferWrt.newLine();
            long startTime, startTime1, startTime2, startTime3;
            startTime = System.currentTimeMillis();
            System.out.println("time here is--before-while : " + startTime);
            while (allitems.hasMoreElements()) {
                String aRow = "";
                XDItem item = (XDItem) allitems.nextElement();
                for (int i = 0; i < props.length; i++) {
                    // props[i], not props: the index was lost in the original post
                    String value = item.getStringValue(props[i]);
                    if (value == null || value.equalsIgnoreCase("null"))
                        value = "";
                    if (i == 0)
                        aRow = value;
                    else
                        aRow += ("\t" + value);
                }
                startTime1 = System.currentTimeMillis();
                System.out.println("time here is--before-writing to buffer --new: " + startTime1);
                bufferWrt.write(aRow.toCharArray());
                bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
                bufferWrt.newLine();
                startTime2 = System.currentTimeMillis();
                System.out.println("time here is--after-writing to buffer : " + startTime2);
            }
            startTime3 = System.currentTimeMillis();
            System.out.println("time here is--after-while : " + startTime3);
            out.close(); // added by rosmon to check extra time taken for extraction
            bufferWrt.close();
            fileWriter.close();
            System.out.println("After -3");
            return extractFileName;
        } catch (Exception e) {
            e.printStackTrace();
            throw new InternalServerException(e.getMessage());
        }
    }

    Hi fiontan,
    Thanks a lot for the response!
    Yeah, I know it's a lot of code, but I thought it'd be more informative if the whole function was quoted.
    I'm in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay occurs is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]); // again props[i], not props
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again, and so on until the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance.
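    For what it's worth, here is a minimal sketch of the same writing pattern with the per-row flush() calls removed, since flushing after every row defeats the BufferedWriter entirely. The proprietary XD* classes are replaced by a plain Iterator over String[] rows purely for illustration; note the 20-second stalls may equally come from the enumeration fetching its next batch from the backend, so if XDItemSet exposes a way to raise its batch size, that is worth trying too.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import java.util.Iterator;

    public class RowExtractor {

        // Writes tab-separated rows to a file. The buffer is flushed once,
        // when the writer is closed, instead of once per row.
        public static String extractRows(Iterator<String[]> rows, String fileName)
                throws IOException {
            try (Writer w = new BufferedWriter(new FileWriter(fileName))) {
                StringBuilder row = new StringBuilder();
                while (rows.hasNext()) {
                    row.setLength(0); // reuse one builder instead of String +=
                    String[] values = rows.next();
                    for (int i = 0; i < values.length; i++) {
                        String value = values[i];
                        if (value == null || value.equalsIgnoreCase("null")) {
                            value = "";
                        }
                        if (i > 0) {
                            row.append('\t');
                        }
                        row.append(value);
                    }
                    row.append(System.lineSeparator());
                    w.write(row.toString()); // no flush() here
                }
            } // try-with-resources closes and flushes exactly once
            return fileName;
        }
    }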

  • Data extraction from R/3 to BW taking too much time

    Hi,
    We have one delta data load from R/3 to an ODS that is taking 4-5 hours; the job runs in R/3 itself for 4-5 hours, even for 30-40 records. After this the ODS data is updated to the cube, but since the ODS load itself takes too much time, the delta brings 0 records into the cube, and we have to update it manually.
    Also, while the load to the ODS is running, we can't check the delta records in RSA3; it gives the error "error occurs during extraction".
    Can you please guide us on how to make this load faster, and if an index needs to be built, how to proceed on that front?
    Thanks
    Nilesh

    Rahul,
    I tried with R; it gives me a dump with the message "Result of customer enhancement: 19571 records".
    Error details are:
    Short text
        Function module " " not found.
    What happened?
        The function module " " is called,
        but cannot be found in the library.
        Error in the ABAP Application Program
        The current ABAP program "SAPLRSA3" had to be terminated because
        it has come across a statement that unfortunately cannot be executed.
    What can you do?
        Note down which actions and inputs caused the error.
        To process the problem further, contact your SAP system
        administrator.
        Using Transaction ST22 for ABAP Dump Analysis, you can look
        at and manage termination messages, and you can also
        keep them for a long time.

  • Azure Data Sync Log Takes Too Much Time to Load

    Hi,
    I've found that the Azure Sync Log inside the Portal takes too much time to load if the sync group was created a long time ago. For new sync groups with only a few logs, it takes around 5 seconds to load; however, I have many sync groups created more than a year ago, and it takes up to 10 minutes to view the first page of the logs.
    It seems the Portal downloads ALL the logs to my browser even when I just want to view the most recent ones (maybe I just want to view the last 10 records, but it downloads the whole 5000 pages?).
    It would be great if the Portal fetched only the data for the page I am currently viewing, instead of all the logs.
    Is there any way to speed up the log loading time?
    Michael Yung

    Hello,
    I suggest you post feedback on the
    Azure SQL Data Sync feedback forum.
    All of the feedback you share in that forum is monitored and reviewed by the Microsoft engineering teams responsible for building Azure.
    Regards,
    Fanny Liu
    TechNet Community Support

  • Query taking too much time with dates??

    hello folks,
    I am trying to pull some data using a date condition, and for some reason it's taking too much time to return the data.
       and trunc(al.activity_date) = TRUNC(SYSDATE, 'DD') - 1     -- if I use this, it takes too much time
       and al.activity_date >= to_date('20101123 000000', 'YYYYMMDD HH24MISS')
       and al.activity_date <= to_date('20101123 235959', 'YYYYMMDD HH24MISS') -- if I use this, it returns the data in a second. Why is that?
    How do I get the previous day without hardcoding to_date('20101123 000000', 'YYYYMMDD HH24MISS'), while still retrieving the data fast?

    Presumably you've got an index on activity_date.
    If you apply a function like TRUNC to activity_date, you can no longer use the index.
    Post execution plans to verify.
    and al.activity_date >= TRUNC (SYSDATE, 'DD') - 1
    and al.activity_date < TRUNC (SYSDATE, 'DD')
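    As a minimal sketch, the rewritten condition in context (the table name behind the alias al is hypothetical; only the predicates come from this thread). Both bounds are computed once from SYSDATE, so the indexed column stays bare and the index remains usable:

    -- yesterday 00:00:00 (inclusive) up to today 00:00:00 (exclusive)
    SELECT *
      FROM activity_log al   -- hypothetical table name
     WHERE al.activity_date >= TRUNC(SYSDATE, 'DD') - 1
       AND al.activity_date <  TRUNC(SYSDATE, 'DD');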

  • Data Dictionary query takes too much time.

    Hello,
    I am using ORACLE DATABASE 11g.
    The query below is taking too much time to execute and return its output. I have tried a few Oracle SQL hints, but they didn't work out.
    SELECT
    distinct B.TABLE_NAME, 'Y'
      FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
    WHERE A.INDEX_NAME = B.INDEX_NAME
       AND A.PARTITION_NAME = C.PARTITION_NAME
       AND C.STATUS = 'UNUSABLE'
        OR A.STATUS = 'UNUSABLE'
        OR B.STATUS = 'INVALID';
    Please guide me on what to do to make this query run faster.
    Thanks in advance.

    Your query is incorrect: because the OR conditions are not parenthesized, it will return ALL tables whenever A.STATUS = 'UNUSABLE' or B.STATUS = 'INVALID'. Most likely you meant:
    SELECT
    distinct B.TABLE_NAME, 'Y'
      FROM USER_IND_PARTITIONS A, USER_INDEXES B, USER_IND_SUBPARTITIONS C
    WHERE A.INDEX_NAME = B.INDEX_NAME
       AND A.PARTITION_NAME = C.PARTITION_NAME
       AND (C.STATUS = 'UNUSABLE'
        OR A.STATUS = 'UNUSABLE'
        OR B.STATUS = 'INVALID');
    But the above will return only subpartitioned tables with invalid/unusable indexes. It will not return non-subpartitioned partitioned tables with invalid/unusable indexes/index partitions, nor non-partitioned tables with invalid/unusable indexes. If you want every table with invalid/unusable indexes, you need an outer join, which will hurt performance even more. I suggest you use UNION:
    SELECT  DISTINCT TABLE_NAME,
                     'Y'
      FROM  (
              SELECT INDEX_NAME,'Y' FROM USER_INDEXES WHERE STATUS = 'INVALID'
             UNION ALL
              SELECT INDEX_NAME,'Y' FROM USER_IND_PARTITIONS WHERE STATUS = 'UNUSABLE'
             UNION ALL
              SELECT INDEX_NAME,'Y' FROM USER_IND_SUBPARTITIONS WHERE STATUS = 'UNUSABLE'
            ) A,
            USER_INDEXES B
      WHERE A.INDEX_NAME = B.INDEX_NAME
    /
    SY.

  • Finishing Backup takes too much time.

    Finishing a backup takes too much time on my system, sometimes 30 minutes or even an hour! And sometimes, right after a backup finishes (before the icon in the menu bar stops), it starts taking another backup (usually 10 MB or something similarly small!), or the estimate is wrong.
    Sometimes I see my system taking backups for hours and hours (with just small pauses!),
    and the data transfer rate is reduced, even though I set it to maximum speed in the Time Capsule.

    And here is the log, with timestamps, from Console:
    4/6/09 2:31:24 PM /System/Library/CoreServices/backupd[2141] Starting standard backup
    4/6/09 2:31:34 PM /System/Library/CoreServices/backupd[2141] Mounted network destination using URL: afp://[email protected]/Sina's%20Time%20Capsule
    4/6/09 2:31:34 PM /System/Library/CoreServices/backupd[2141] Backup destination mounted at path: /Volumes/Sina's Time Capsule
    4/6/09 2:31:39 PM /System/Library/CoreServices/backupd[2141] Disk image /Volumes/Sina's Time Capsule/Sina’s MacBook_0017f2347181.sparsebundle mounted at: /Volumes/Backup of Sina’s MacBook
    4/6/09 2:31:39 PM /System/Library/CoreServices/backupd[2141] Backing up to: /Volumes/Backup of Sina’s MacBook/Backups.backupdb
    4/6/09 2:39:45 PM /System/Library/CoreServices/backupd[2141] No pre-backup thinning needed: 862.6 MB requested (including padding), 205.56 GB available
    4/6/09 2:46:52 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:48:07 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:50:06 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:50:06 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 2:57:21 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 3:03:57 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:03:58 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:03:58 PM /System/Library/CoreServices/backupd[2141] Unable to rebuild path cache for source item. Partial source path:
    4/6/09 3:12:55 PM /System/Library/CoreServices/backupd[2141] Copied 29426 files (162.6 MB) from volume Macintosh Disk.
    4/6/09 3:17:36 PM /System/Library/CoreServices/backupd[2141] No pre-backup thinning needed: 678.9 MB requested (including padding), 205.56 GB available
    4/6/09 3:37:18 PM /System/Library/CoreServices/backupd[2141] Bulk setting Spotlight attributes failed.
    4/6/09 3:37:57 PM /System/Library/CoreServices/backupd[2141] Copied 2362 files (100.3 MB) from volume Macintosh Disk.
    4/6/09 3:42:00 PM /System/Library/CoreServices/backupd[2141] Starting post-backup thinning

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g, version 10.1.2.0.2, but the application takes too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs, I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar?
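    The command line above applies only to a standalone OC4J started by hand. For an OPMN-managed instance, as in Oracle Application Server 10g, JVM options normally go into the java-options start parameter in opmn.xml. The fragment below is a hedged sketch from memory of the 10g layout; the process-type id and the existing option values are assumptions, so verify them against your own $ORACLE_HOME/opmn/conf/opmn.xml:

    <process-type id="home" module-id="OC4J" status="enabled">
      <module-data>
        <category id="start-parameters">
          <!-- append -Dajp.keepalive=true to the options already present -->
          <data id="java-options" value="-server -Dajp.keepalive=true"/>
        </category>
      </module-data>
    </process-type>

    After editing, restart the instance (e.g. opmnctl reload, then opmnctl restartproc) so the new JVM options take effect.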

  • PRS-600 e-reader taking too much time to load all the books

    Whenever I start my PRS-600 after a complete shutdown, it takes too much time to load all the files on the memory card. Is anyone else facing the same issue?

    Hello,
    Recently I have been having some serious issues with my Sony PRS-600. I am running Windows 7 64-bit and an updated Reader Library 3.3.
    The issue comes when transferring books from the library to the e-reader, and from the e-reader to a collection: the software becomes intolerably slow while it processes the command. The Reader Library window grays out, displays "(Not Working)", and if clicked shades to a white color and offers either "cancel operation" or "wait until program responds". If I do cancel the operation, the e-reader appears not to follow and still displays "Do not disconnect". Since I do not see any other way to disconnect (other than the eject option), I remove the USB plug, which causes a few more issues with the reader (such as removing all of my collections, for example!).
    But anyway, that's not the main issue here. The main issue is that book transferring is really slow. I need to wait a couple of minutes (or even more) just for the software to process. Moving just 1 MB of data takes as much time as if it were 1 GB. Sometimes it's random and fast, and sometimes the application is better left alone while it processes the command. If I open My Computer, simply loading the e-reader storage icons and information makes Windows Explorer "crash" (e.g., close all windows and then reopen them). It just happens that, in all randomness, even creating a collection makes the software slow.
    So to recap: the reader software is slow when adding and moving books.
    I hope someone will help me resolve this annoyance.
    Thank you,
    KQ

  • [Solved] Dolphin taking too much time to respond

    Dolphin, when used by an ordinary user, takes too much time to respond, especially when Firefox is running. The time it takes to initialize is also very long. At times it does not even open, and even when active it hangs for quite some time before becoming responsive again when selecting a file or two.
    At the same time, it works rather faster when run as the superuser from the command line using $ sudo dolphin.
    But then the console displays a lot of error messages, as follows:
    sebinaj ~
    $ sudo dolphin
    Error: "/var/tmp/kdecache-sebinaj1Jz46I" is owned by uid 1002 instead of uid 0.
    Error: "/tmp/kde-sebinaj" is owned by uid 1002 instead of uid 0.
    sebinaj ~
    $ Error: "/tmp/ksocket-sebinaj" is owned by uid 1002 instead of uid 0.
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Unsupported operation (2)": "Invalid model"
    "/usr/bin/dolphin(3298)" Error in thread 140247744997200 : "Invalid iterator."
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/03/22/nco#title
    Error: alias comment requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#comment, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#comment
    Error: alias count requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#count, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#count
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#created
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#description
    Error: alias duration requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#duration, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#duration
    Error: alias encoding requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#encoding, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#encoding
    Error: alias role requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#role, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#role
    Error: alias url requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#url, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#url
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#version
    Error: alias bitsPerSample requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#bitsPerSample, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#bitsPerSample
    Error: alias copyright requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#copyright, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#copyright
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#date
    Error: alias dateTime requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#dateTime, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#dateTime
    Error: alias geo requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#geo, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#geo
    Error: alias height requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#height, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#height
    Error: alias width requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#width, http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#width
    Error: alias date requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#date, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#date
    Error: alias fileOwner requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#fileOwner, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#fileOwner
    Error: alias language requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#language, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#language
    Error: alias length requested by several properties: http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#length, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#length
    Error: alias publisher requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#publisher, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#publisher
    Error: alias title requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#title, http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#title
    Error: alias contributor requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#contributor, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#contributor
    Error: alias created requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#created, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#created
    Error: alias creator requested by several properties: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#creator, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#creator
    Error: alias description requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#description, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#description
    Error: alias identifier requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#identifier, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier
    Error: alias lastModified requested by several properties: http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#lastModified, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#lastModified
    Error: alias version requested by several properties: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#version, http://www.semanticdesktop.org/ontologies/2007/08/15/nao#version
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#fileExtension' is not defined in any rdfs ontology database.
    WARNING: field 'http://strigi.sf.net/ontologies/0.9#debugParseError' is not defined in any rdfs ontology database.
    /usr/lib/strigi/strigiea_ics.so
    /usr/lib/strigi/strigiea_jpeg.so
    /usr/lib/strigi/strigiea_vcf.so
    /usr/lib/strigi/strigila_cpp.so
    /usr/lib/strigi/strigila_deb.so
    /usr/lib/strigi/strigila_diff.so
    /usr/lib/strigi/strigila_mobi.so
    /usr/lib/strigi/strigila_namespaceharvester.so
    /usr/lib/strigi/strigila_po.so
    /usr/lib/strigi/strigila_txt.so
    /usr/lib/strigi/strigila_xpm.so
    /usr/lib/strigi/strigita_au.so
    /usr/lib/strigi/strigita_audible.so
    /usr/lib/strigi/strigita_avi.so
    /usr/lib/strigi/strigita_dds.so
    /usr/lib/strigi/strigita_dvi.so
    /usr/lib/strigi/strigita_font.so
    /usr/lib/strigi/strigita_gif.so
    /usr/lib/strigi/strigita_ico.so
    /usr/lib/strigi/strigita_mp4.so
    /usr/lib/strigi/strigita_pcx.so
    /usr/lib/strigi/strigita_rgb.so
    /usr/lib/strigi/strigita_sid.so
    /usr/lib/strigi/strigita_ts.so
    /usr/lib/strigi/strigita_wav.so
    /usr/lib/strigi/strigita_xbm.so
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#usesNamespace' is not defined in any rdfs ontology database.
    WARNING: field 'translation.total' is not defined in any rdfs ontology database.
    WARNING: field 'translation.translated' is not defined in any rdfs ontology database.
    WARNING: field 'translation.untranslated' is not defined in any rdfs ontology database.
    WARNING: field 'translation.obsolete' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.modify_file_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.first_modify_file' is not defined in any rdfs ontology database.
    WARNING: field 'content.format_subtype' is not defined in any rdfs ontology database.
    WARNING: field 'content.generator' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.hunk_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.insert_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.modify_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'diff.stats.delete_line_count' is not defined in any rdfs ontology database.
    WARNING: field 'translation.fuzzy' is not defined in any rdfs ontology database.
    WARNING: field 'translation.last_translator' is not defined in any rdfs ontology database.
    WARNING: field 'translation.translation_date' is not defined in any rdfs ontology database.
    WARNING: field 'translation.source_date' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#colorCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#formatSubtype' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#bitsPerSample' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#audioSampleDataType' is not defined in any rdfs ontology database.
    WARNING: field 'content.mime_type' is not defined in any rdfs ontology database.
    WARNING: field 'audio.title' is not defined in any rdfs ontology database.
    WARNING: field 'audio.artist' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audio.narrator' is not defined in any rdfs ontology database.
    WARNING: field 'media.codec' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audible.user_id' is not defined in any rdfs ontology database.
    WARNING: field 'todo.audible.user_alias' is not defined in any rdfs ontology database.
    WARNING: field 'audio.duration' is not defined in any rdfs ontology database.
    WARNING: field 'content.description' is not defined in any rdfs ontology database.
    WARNING: field 'content.copyright' is not defined in any rdfs ontology database.
    WARNING: field 'content.keyword' is not defined in any rdfs ontology database.
    WARNING: field 'content.creation_time' is not defined in any rdfs ontology database.
    WARNING: field 'content.maintainer' is not defined in any rdfs ontology database.
    WARNING: field 'content.ID' is not defined in any rdfs ontology database.
    WARNING: field 'audio.channel_count' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nfo#colorDepth' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#colorSpace' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#compressionAlgorithm' is not defined in any rdfs ontology database.
    WARNING: field 'font.family' is not defined in any rdfs ontology database.
    WARNING: field 'font.weight' is not defined in any rdfs ontology database.
    WARNING: field 'font.slant' is not defined in any rdfs ontology database.
    WARNING: field 'font.width' is not defined in any rdfs ontology database.
    WARNING: field 'font.spacing' is not defined in any rdfs ontology database.
    WARNING: field 'font.foundry' is not defined in any rdfs ontology database.
    WARNING: field 'content.version' is not defined in any rdfs ontology database.
    WARNING: field 'content.genre' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_trackNumber' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_discNumber' is not defined in any rdfs ontology database.
    WARNING: field 'content.author' is not defined in any rdfs ontology database.
    WARNING: field 'content.comment' is not defined in any rdfs ontology database.
    WARNING: field 'audio.album' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_audio.albumartist' is not defined in any rdfs ontology database.
    WARNING: field 'content.links' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_content.purchaser' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_content.purchasedate' is not defined in any rdfs ontology database.
    WARNING: field 'media.duration' is not defined in any rdfs ontology database.
    WARNING: field 'TODO_video.duration' is not defined in any rdfs ontology database.
    WARNING: field 'av.audio_codec' is not defined in any rdfs ontology database.
    WARNING: field 'av.video_codec' is not defined in any rdfs ontology database.
    WARNING: field 'content.thumbnail' is not defined in any rdfs ontology database.
    WARNING: field 'user.rating' is not defined in any rdfs ontology database.
    WARNING: field 'image.width' is not defined in any rdfs ontology database.
    WARNING: field 'image.height' is not defined in any rdfs ontology database.
    WARNING: field 'media.sample_rate' is not defined in any rdfs ontology database.
    WARNING: field 'media.sample_format' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#artist' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#albumTrackCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#musicAlbum' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#genre' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#composer' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#trackNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#setNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#performer' is not defined in any rdfs ontology database.
    WARNING: field 'http://www.semanticdesktop.org/ontologies/nmm#internationalStandardRecordingCode' is not defined in any rdfs ontology database.
    WARNING: field 'Product Id' is not defined in any rdfs ontology database.
    WARNING: field 'Events' is not defined in any rdfs ontology database.
    WARNING: field 'Journals' is not defined in any rdfs ontology database.
    WARNING: field 'Todos' is not defined in any rdfs ontology database.
    WARNING: field 'Todos Completed' is not defined in any rdfs ontology database.
    WARNING: field 'Todos Overdue' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#ccdWidth' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#focusDistance' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#targetQuality' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#givenName' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#familyName' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#emailAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homepageContactURL' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentComment' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#cellPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#faxPhoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#phoneNumber' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#homePostalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#workPostalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#postalAddress' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificPrefix' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#honorificSuffix' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#subject' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#title' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#author' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#description' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#copyright' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#isContentEncrypted' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#contentKeyword' is not defined in any rdfs ontology database.
    WARNING: field 'http://freedesktop.org/standards/xesam/1.0/core#paragraphCount' is not defined in any rdfs ontology database.
    WARNING: field 'http://rdf.openmolecules.net/0.9#moleculeCount' is not defined in any rdfs ontology database.
    kDebugStream called after destruction (from void KDirWatchPrivate::removeEntry(KDirWatch*, KDirWatchPrivate::Entry*, KDirWatchPrivate::Entry*) file /home/phil/kdemod/core/kdelibs/src/kdelibs-4.3.3/kio/kio/kdirwatch.cpp line 901)
    Cancelled INotify (fd 9, 1) for "/home/sebinaj/.local/share"
    ^C
    I am using KDEmod + Arch
    Last edited by absolutevoid (2009-11-05 17:23:59)

    There's a large thread around about this Dolphin problem.
    Disabling Nepomuk in System Settings has proved to be the cure in many cases.
    Deej

  • Page sometimes leads to proxy error or takes too much time to load

    Hi All,
    APEX 4.0
    Web server: Apache 1.3.9 (Oracle 9iAS 10.0.1.2.2)
    I get the error below the first time I try to open a page, or the page takes 3 to 5 minutes to load. From the second time onwards it takes 5 to 8 seconds to open, as normal. I debugged the page and checked the logs; logs and execution times look normal. Why does the page take so much time to load the first time, or lead to a proxy error? Has anybody had the same experience before?
    Proxy Error
    The proxy server received an invalid response from an upstream server.
    The proxy server could not handle the request GET/pls/apex/f.
    Reason : Document contains no data
    Please guide me to find and resolve this issue.
    Thanks in Advance
    Lakshmi

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads, and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar?

  • PDPageDrawContentsToWindowEx takes too much time

    We are using an Acrobat plug-in that renders the PDF file to a bitmap in memory.
    We are using Acrobat Professional X, but the same problems also appear in Acrobat 9.
    We have received several problematic PDF files from our customers that cause the call that renders the image,
    PDPageDrawContentsToWindowEx(), to take an unreasonably long time.
    My target resolution is 600 dpi, but I couldn't wait for the call to return; after more than 6 minutes I killed the process.
    The same PDFs render in Acrobat with a slight delay (flickering and repainting) but in reasonable time.
    I have written about this problem on previous occasions (Aug 5, 2010). Since then, further problematic samples
    show that it is linked somehow with transparency being present, but not on all PDFs with transparency.
    We have a fast computer, so the problem is somewhere in the PDF analysis.
    Trying to optimize the file didn't help.
    Checking with Preflight for PDF syntax issues also didn't find anything.

    I'll have to check the headers, but I KNOW that we exposed it to plugins in A9 – it was necessary since we pulled the DrawToWindow call on the Mac (Carbon vs. Cocoa).
    What you are asking the SDK to do is going to be painful on large, complex images. Drawing into an HDC/Window adds SIGNIFICANT overhead to our rendering process, since we have to do all the work in our own "bit buffer" and then copy that buffer into the OS-provided HDC. OUCH! This is why DrawToMemory is better.
    Additionally, if you have files with complex transparency AND you want overprint preview, that's also going to be a VERY complex rendering pipeline – made WORSE by the need to end up in an HDC. Forgetting separations for the moment, consider that we have to convert everything to CMYK (since you can only compute overprint in CMYK), blend colors, then convert all of that to RGB. And that's assuming SIMPLE transparency. If you have multiple blending groups, soft masks, etc., then you just raised the bar even more!
    How long does it take Acrobat to open the PDF and render it completely to screen? For separations, how long does a "Flatten Transparency" operation take?
    UpdateRect won't help because of the overprint and then transparency flattening.

  • Parsing the query takes too much time.

    Hello.
    I am hitting a bug in Oracle XE (parsing some query takes too much time).
    A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
    Please raise a bug for Oracle XE.
    Steps to reproduce the issue:
    1. Extract files from testcase_dump.zip and testcase_sql.zip
    2. Under username SYSTEM execute script schema.sql
    3. Import data from file TESTCASE14.DMP
    4. Under username SYSTEM execute script testcase14.sql
    SQL text can be downloaded from http://files.mail.ru/DJTTE3
    Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
    Regards,
    Viacheslav.

    Bug number? Which version does the fix apply to?
    Is there a relevant Note that describes the problem and points out bug/patch availability?
    With a little luck some PSEs might be "backported", since 11g XE is not the base release, e.g. 11.2.0.1.

  • Import taking too much time

    Hi all,
    I'm quite new to database administration. My problem is that I'm trying to import a dump file, but one of the tables takes too much time to import.
    Description:
    1. The export was taken from a source database on Oracle 8i, character set WE8ISO8859P1.
    2. I am importing into 10g with character set UTF8; the national character set is the same.
    3. The dump file is about 1.5 GB.
    4. I got errors like "value too large for column", so in the target DB, which is in UTF8, I converted all columns from VARCHAR2 to CHAR.
    5. While importing, some tables import very fast, but one particular table gets very slow.
    Please help me. Thanks in advance.

    Hello,
    Regarding your points:
    4. I got errors like "value too large for column", so in the target DB, which is in UTF8, I converted all columns from VARCHAR2 to CHAR.
    5. While importing, some tables import very fast, but one particular table gets very slow.
    For point *4*, it's typically due to the CHARACTER SET conversion.
    You export data in WE8ISO8859P1 and import into UTF8. In WE8ISO8859P1, characters are encoded in *1 byte*, so *1 CHAR = 1 BYTE*. In UTF8 (Unicode), characters are encoded in up to *4 bytes*, so *1 CHAR > 1 BYTE*.
    For this reason you'll have to modify the length of your CHAR or VARCHAR2 columns, or add the CHAR option (the default is BYTE) to the column datatype definitions of the tables. For instance:
    VARCHAR2(100 CHAR)
    The NLS_LENGTH_SEMANTICS parameter may also be used, but it's not very well handled by export/import.
    So, I suggest this:
    1. Set NLS_LENGTH_SEMANTICS=CHAR on your target database and restart the database.
    2. Create all your tables (empty) from a script on the target database (without the indexes and constraints).
    3. Import the data into the tables.
    4. Import the indexes and constraints.
    You'll find more information in the following note on MOS:
    Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) [ID 144808.1]
    For point *5*, it may be due to the conversion problem you are experiencing; it may also be due to some special datatype like LONG.
    Also, a question: why did you choose UTF8 for your target database and not AL32UTF8? AL32UTF8 is recommended for Unicode.
    Hope this helps.
    Best regards,
    Jean-Valentin
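    As a short, hypothetical illustration of the CHAR-semantics fix described above (the table and column names are made up; only VARCHAR2(100 CHAR) and the NLS_LENGTH_SEMANTICS setting come from the reply):

    -- Make new DDL use character semantics by default on the target database
    -- (shown at session level here; set it in the spfile and restart for the
    -- instance-wide effect suggested above).
    ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;

    -- Explicit per-column form: 100 characters rather than 100 bytes, so
    -- multi-byte UTF8 data converted from WE8ISO8859P1 still fits.
    CREATE TABLE customers (
      name VARCHAR2(100 CHAR)
    );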
