This list is too long to be synchronized with SharePoint WorkSpace

Hello,
I have a list with 30,500 items in a SharePoint site collection. When I synchronize this list using SharePoint Workspace, I get the message "This list is too long to be synchronized with SharePoint WorkSpace".
Can someone help me?
Best regards,
Fidele

SharePoint Workspace has a maximum limit of 30,000 items per list.  The only way to sync the list you are trying to sync is to reduce the number of items in it.  You can see the limit documented here:
https://support.office.microsoft.com/en-us/article/Synchronize-SharePoint-content-with-SharePoint-Workspace-752f639f-8532-4923-888a-284cfc79337e?CorrelationId=fb3297d1-5572-4da7-b97b-81c42308baa2&ui=en-US&rs=en-US&ad=US#__toc263166752
Paul Stork SharePoint Server MVP
Principal Architect: Blue Chip Consulting Group
Blog: http://dontpapanic.com/blog
Twitter: Follow @pstork
Please remember to mark your question as "answered" if this solves your problem.

Similar Messages

  • Single user mode: Argument List is too long, PLEASE HELP

    Hey guys,
    I recently posted a topic asking if anyone knew how to delete all PNG files on the desktop.
    Not many people replied, but I did some research and now I know how to do it.
    However, in single-user mode (Cmd + S), when I typed rm *.png while in the Desktop directory, it came up saying "Argument list too long".
    Does anyone know how to fix this? Please reply ASAP.
    If possible, add me on Skype (mark.davidson19); it would be extremely helpful.

    If there are too many files expanded via the *.png wildcard, then the argument list length maximum can be exceeded.  The last time I checked, Mac OS X had a 256K line length limitation. (AIX was 1M, Linux 128K, Solaris 1M, Windows 8K (Cygwin env) - your mileage may vary with each operating system release).
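    A quick way to check the limit on your own system (a sketch; the exact value varies by OS and release) is the POSIX getconf utility:

```shell
# Print the maximum combined size of command-line arguments plus
# environment variables, in bytes.
getconf ARG_MAX
```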
    If you have too many files to expand on the command line, then you can delete them in batches.  There are several ways to do this:
    cd ~/Desktop
    rm [a-m]*.png
    rm [n-z]*.png
    rm [A-M]*.png
    rm [N-Z]*.png
    Or finer increments.
    You could use something like
    find ~/Desktop -maxdepth 1 -name '*.png' -print0 | xargs -0 rm
    And you can also use
    find ~/Desktop -maxdepth 1 -name '*.png' -delete
    This being a Unix environment, there are most likely a dozen additional ways to delete all the .png files and avoid command line length limits.
    As MrHoffman says, using rm with wildcards is a very dangerous thing to do unless you really REALLY know what you are doing.  If not, I strongly suggest having a recent full backup handy.  Actually, I suggest a backup regardless of how good you are in the Unix environment (I have multiple, via different backup utilities, performed on a very regular basis; years of experience has taught me you cannot be too careful with your data - it is worth far more than the cost of backup equipment).
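    Before trusting any bulk-delete command with real files, a cautious approach is to rehearse it in a scratch directory first. A minimal sketch (the paths and filenames here are illustrative, not your real desktop):

```shell
# Create a scratch directory with a mix of files.
tmp=$(mktemp -d)
touch "$tmp/a.png" "$tmp/b.png" "$tmp/keep.txt"

# find emits NUL-separated paths; xargs -0 splits them into batches
# that each fit under ARG_MAX, so no single rm invocation is too long.
find "$tmp" -maxdepth 1 -name '*.png' -print0 | xargs -0 rm

ls "$tmp"      # keep.txt should be the only file left
rm -r "$tmp"
```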

  • Can't delete unsent messages, name list is too long

    This morning, I tried to send an email with a bcc list over 200 (a standard mailing list I use all the time). Unfortunately, the server wouldn't let me send it because of a single bad address. Now what I have in my Mail program is the unsent message on the screen, with the warning, "Cannot send message using the server mail.xxx.edu:xxx. The server "mail.xxx.edu" did not recognize the following recipients:" And then it lists all 200 names.
    The problem is that the buttons that normally appear at this time (Edit Message, Try with Selected Server) are all the way off the bottom of the screen, so I can't cancel the message. All I can do is minimize it; it seems like the message is stuck there permanently.
    The obvious solutions have failed. I can't scroll the list of names downward. I can't push the top of the message past the top of the screen. I can't quit the program and restart, because the message just comes right back. I can't zoom the screen out in order to see the buttons. There are no active cancel buttons at the top of the message. Help, thank you!
    - Erin

    Hello,
    I just got the same problem. The list is blocking any access to the mail I want to send. I can at least minimize the window of that mail, allowing me to read and send other mails.
    But this is really, really bugging me. I need to send a mail to the alumni of my university program, so I've composed a long mail with a lot of information. Now I can't even copy it into another mail, because the stuck message is blocking everything.

  • IH01 Structure List taking too long to expand

    Dear all,
    I have a problem whereby, when we enter a particular functional location with lots of networks assigned in IH01, the drop-down takes too long to expand at each network level.
    Could this be a performance issue? It could be, but I'm not 100% sure. Before I attempt the following note, has anyone had this problem before, or found other alternatives?
    This is the note which may solve the problem.
    Note 825541 - Performance problems in functional location

    Hi philipx,
    For the performance issue you describe in IH01, the first step would be to update the database statistics for the tables EQUI, EQUZ, ILOA and EQKT, if you haven't already.
    Also, you could check Note 50610.
    Thanks,
    Enda.

  • Time_out Dump on this query take too long time

    Hi experts,
    In my report, a query is taking too long.
    Please provide performance tips or suggestions.
    select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart    
           mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr    
           mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt    
           mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl    
           mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme    
           mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart    
           mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln    
           mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr    
           mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein    
           mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew    
           mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr    
           mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng    
           mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
           mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers    
           mseg~werks  mseg~xauto  mseg~zeile  mseg~SGTXT    
        into table itab                                      
           from mkpf as mkpf                                 
            inner join mseg as mseg                          
                    on mkpf~MBLNR = mseg~mblnr               
                   and mkpf~mjahr = mseg~mjahr.

    Now, the original query is as follows; I use a WHERE clause with conditions.
    select mkpf~mblnr  mkpf~mjahr  mkpf~usnam  mkpf~vgart
           mkpf~xabln  mkpf~xblnr  mkpf~zshift mkpf~frbnr
           mkpf~bktxt  mkpf~bldat  mkpf~budat  mkpf~cpudt
           mkpf~cputm  mseg~anln1  mseg~anln2  mseg~aplzl
           mseg~aufnr  mseg~aufpl  mseg~bpmng  mseg~bprme
           mseg~bstme  mseg~bstmg  mseg~bukrs  mseg~bwart
           mseg~bwtar  mseg~charg  mseg~dmbtr  mseg~ebeln
           mseg~ebelp  mseg~erfme  mseg~erfmg  mseg~exbwr
           mseg~exvkw  mseg~grund  mseg~kdauf  mseg~kdein
           mseg~kdpos  mseg~kostl  mseg~kunnr  mseg~kzbew
           mseg~kzvbr  mseg~kzzug  mseg~lgort  mseg~lifnr
           mseg~matnr  mseg~meins  mseg~menge  mseg~lsmng
           mseg~nplnr  mseg~ps_psp_pnr  mseg~rsnum  mseg~rspos
           mseg~shkzg  mseg~sobkz  mseg~vkwrt  mseg~waers
           mseg~werks  mseg~xauto  mseg~zeile  mseg~SGTXT
        into table itab
           from mkpf as mkpf
            inner join mseg as mseg
                    on mkpf~MBLNR = mseg~mblnr
                   and mkpf~mjahr = mseg~mjahr
        WHERE mkpf~budat IN budat
          AND mkpf~usnam IN usnam
          AND mkpf~vgart IN vgart
          AND mkpf~xblnr IN xblnr
          AND mkpf~zshift IN p_shift
          AND mseg~bwart IN bwart
          AND mseg~matnr IN matnr
          AND mseg~werks IN werks
          AND mseg~lgort IN lgort
          AND mseg~charg IN charg
          AND mseg~sobkz IN sobkz
          AND mseg~lifnr IN lifnr
          AND mseg~kunnr IN kunnr.

  • ORA-09100 specified length too long for its datatype with Usage Tracking.

    Hello Everyone,
    I'm getting an "ORA-09100 specified length too long for its datatype" error (a sample is provided below) when viewing the "Long-Running Queries" report from the default Usage Tracking dashboard. I've isolated the problem to the logical column "Logical SQL", corresponding to the physical column "QUERY_TEXT" in the table S_NQ_ACCT. Everything else is working correctly. The logical column "Logical SQL" is configured as a VARCHAR of length 1024, and the physical column "QUERY_TEXT" is configured as a VARCHAR2 of 1024 bytes in an Oracle 11g database. Both are the default configurations and were not changed.
    In the table S_NQ_ACCT we do have record entries in the field "QUERY_TEXT" that are 1024 characters long. I've tried various configurations, such as increasing the number of bytes or removing any special characters, but without any success. Currently, my only workaround is reducing the "QUERY_TEXT" data entries to roughly 700 characters; this makes the error go away. An additional point: my character set is WE8ISO8859P15.
    - Any suggestions?
    - Has anyone else ever had this problem?
    - Is this potentially an issue with the ODBC driver? If so, why would ODBC not truncate the field length?
    - What is the maximum length supported by BI / ODBC?
    Thanks in advance for everyone's help.
    Regards,
    FBELL
    *******************************Error Message**************************************************
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 17001] Oracle Error code: 910, message: ORA-00910: specified length too long for its datatype at OCI call OCIStmtExecute: select distinct T38187.QUERY_TEXT as c1 from S_NQ_ACCT T38187 order by c1. [nQSError: 17011] SQL statement execution failed. (HY000)
    SQL Issued: SELECT Topic."Logical SQL" saw_0 FROM "Usage Tracking" ORDER BY saw_0
    *******************************************************************************************

    I believe I have found the issue for at least one report.
    We have views in our production environment that call materialized views on another database via db link. They are generated nightly to reduce load for day-old reporting purposes on the Production server.
    I have found that the report in question uses a view with PRODUCT_DESCRIPTION. In the remote database, this is a VARCHAR2(1995 Bytes) column. However, when we create a view in our Production environment that simply calls this materialized view, it moves the length to VARCHAR2(4000).
    The oddest thing is that the longest string stored in the MV for that column is 71 characters long.
    I may be missing something here.... But the view that Discoverer created on the APPS side also has a column length for the PRODUCT_DESCRIPTION column of VARCHAR2(4000) and running the report manually returns results less than that - is this a possible bug?

  • This resultset takes too long

    I'm getting about 15k rows from an Access 2003 database using the jdbc odbc driver but when I iterate over my resultset, it's taking about 40 seconds (although spikes to almost 2 minutes in some cases). I'm just wondering if this is a reasonable number to expect for a resultset of this size, and if so, is there any way to optimize this?
    My query is a very simple 'select field1, field2...from table1' type of query, so I don't imagine it could be made any more efficient. Here's the code I'm using to get my local disconnected version of the resultset:
    while (rs.next()) {
        item = new HashMap<String, Object>();
        for (int i = 1; i <= metadata.getColumnCount(); i++) {
            item.put(metadata.getColumnName(i), rs.getObject(i));
        }
        data.add(item);
    }
    I've also tried forgoing the resultset and using a CachedRowSet, but that seems to take the same amount of time. I've also tried setting the fetch size in both cases, but the improvement was insignificant.
    Any help is appreciated. Thanks.

    Hey thanks for all of the quick responses. Here's some more info to respond to your questions.
    Each row is 5 columns. The item variable is declared just outside of that loop -- I didn't think it mattered much either way. data is a list of maps: ArrayList<Map<String, Object>>. No, I haven't tried setting the initial capacity.
    In terms of timing, I'm still convinced it's getting the objects from the resultset that's the big slowdown. The query itself runs in about a second. I modified the following line to see how long the rs.next loop would take, and it cut the time down to just under six seconds:
    item.put(metadata.getColumnName(i), "test");
    I realize actually getting the object from the result set is probably the most costly operation here, but it shows that creating the other objects isn't the big bottleneck. And like I said, I was having a similar speed issue when using the CachedRowSet.
    Any thoughts on how to make this part happen faster? Or am I missing something else here? Thanks again.

  • Stopping of javathreads took 39564.694 ms  - This is much too long

    Hi,
    We have a cluster of many jrockit jvms.
    We need throughput and low gc-pausetimes.
    Every once in a while one of these machines shows exceptionally long gc-times.
    While looking at the log-files we find entries like :
    [Tue Dec 23 22:36:40 2008][1230068200511] GC reason: GC trigger reached, cause: Heap too full
    [Tue Dec 23 22:37:17 2008][1230068237423] Stopping of javathreads took 36909.749 ms
    [Mon Dec 29 16:41:18 2008][1230565278575] GC reason: GC trigger reached, cause: Heap too full
    [Mon Dec 29 16:41:58 2008][1230565318141] Stopping of javathreads took 39564.694 ms
    Generally these times are well below one second and sometimes they go a bit over a second.
    JAVA_OPTS="-server -Xms800M -Xmx4096M -XXcompressedRefs:false -Djava.net.preferIPv4Stack=true -XXsetGC:singleconcon -Xverboselog:/var/log/javavm.log -Xverbose:memory,memdbg,compaction,opt,gc -XverboseDecorations=timestamp,millis -Xgcreport -Xstrictfp -XXexitOnOutOfMemory -XXdumpFullState -XXstaticCompaction -XXcompactRatio:10 "
    we can reproduce this issue on 27.3.1, 27.4.0, 27.5.0 and 27.6.0
    [Mon Dec 29 16:41:18 2008][1230565278575] GC reason: GC trigger reached, cause: Heap too full
    [Mon Dec 29 16:41:58 2008][1230565318141] Stopping of javathreads took 39564.694 ms
    [Mon Dec 29 16:41:58 2008][1230565318141] old collection 233002 started
    [Mon Dec 29 16:41:58 2008][1230565318141] Alloc Queue size before GC: 653056, tlas: 13, oldest: 0
    [Mon Dec 29 16:41:58 2008][1230565318141] Compacting 12 heap parts at index 72 (type internal) (exceptional false)
    [Mon Dec 29 16:41:58 2008][1230565318141] OC 233002: 12 parts (max 128), index 72. Type internal, (exceptional false)
    [Mon Dec 29 16:41:58 2008][1230565318141] Area start: 0x2add266ef500, end: 0x2add2f37a080
    [Mon Dec 29 16:41:58 2008][1230565318141] Starting initial marking phase (OC1).
    [Mon Dec 29 16:41:58 2008][1230565318194] Restarting of javathreads took 17.228 ms
    [Mon Dec 29 16:41:58 2008][1230565318195] Starting concurrent marking phase (OC2).
    [Mon Dec 29 16:41:58 2008][1230565318594] Adding 3 final handles from dying thread 18538 'process reaper'.
    [Mon Dec 29 16:42:00 2008][1230565320020] Starting precleaning phase (OC3).
    [Mon Dec 29 16:42:00 2008][1230565320608] Stopping of javathreads took 488.759 ms
    [Mon Dec 29 16:42:00 2008][1230565320608] Starting final marking phase (OC4).
    [Mon Dec 29 16:42:00 2008][1230565320689] Removing 20 permanent work packets from pool, now 1498 packets
    [Mon Dec 29 16:42:00 2008][1230565320689] total concurrent mark time: 2547.660 ms
    [Mon Dec 29 16:42:00 2008][1230565320689] ending marking phase
    [Mon Dec 29 16:42:00 2008][1230565320689] Requesting to run parallel sweep since Alloc Queue is non-empty.
    [Mon Dec 29 16:42:00 2008][1230565320689] Will use parallel sweep instead of concurrent sweep due to request.
    [Mon Dec 29 16:42:00 2008][1230565320689] starting parallel sweeping phase
    [Mon Dec 29 16:42:01 2008][1230565321031] Updated 815069 references in 34219 nonmoved and 586187 moved objects
    [Mon Dec 29 16:42:01 2008][1230565321038] total sweep time: 348.937 ms
    [Mon Dec 29 16:42:01 2008][1230565321038] ending sweeping phase
    [Mon Dec 29 16:42:01 2008][1230565321038] Alloc Queue size after GC: 0, tlas: 0, oldest: 0
    [Mon Dec 29 16:42:01 2008][1230565321038] Average compact time ratio: 0.256837
    [Mon Dec 29 16:42:01 2008][1230565321038] Compaction pause: 153.938 (target 387.731), update ref pause: 194.594 (target 387.731)
    [Mon Dec 29 16:42:01 2008][1230565321038] Updated 975843 refs: 815069 inside compaction area, and 160774 outside (limit: 2390308).
    [Mon Dec 29 16:42:01 2008][1230565321038] Compaction ended at index 83, object end address 0x2add2f376980.
    [Mon Dec 29 16:42:01 2008][1230565321038] Summary: 233002;72;83;12;1;0;153.938;387.731;194.594;387.731;975843;2390308
    [Mon Dec 29 16:42:01 2008][1230565321038] gc-trigger is 21.644 %
    [Mon Dec 29 16:42:01 2008][1230565321038] 1204176.144-1204179.042: GC 1237312K->768208K (1535092K), sum of pauses 466.296 ms
    [Mon Dec 29 16:42:01 2008][1230565321038] Page faults before GC: 2, page faults after GC: 2, pages in heap: 383773
    [Mon Dec 29 16:42:01 2008][1230565321039] (OC) Pending finalizers 0->260
    [Mon Dec 29 16:42:01 2008][1230565321042] Restarting of javathreads took 3.487 ms
    (from 27.5.0)
    Any ideas? Should we tune JVM parameters?
    Thanks in advance.
    Cheers, Martin & Johannes

    Hi
    The questions asked above are a good place to start, especially what OS you are running. The suspend mechanism is a little bit different between Linux/Solaris and Windows.
    Basically, when a thread is stopped, it is sent a signal (using standard signalling on Linux/Solaris). Sometimes signals get blocked in these OSes; we have occasionally seen very long latencies due to this. One cause could be an application reading or writing to a stale or slow NFS drive: during the operation, signalling is blocked, which could lead to these problems. If you use NFS, try verifying that the NFS shares are working properly.
    Another problem could be with something called roll forwarding (this would be a JRockit problem). However, you should be able to see this in the logs when running with Xverbose:memdbg as you are doing. Try searching for "roll forwarding" in the log. If you find anything, try adding the java option "-XXrollforwardretrylimit:-1" (If you find anything I can explain what this does ;))
    The next step for me would be to take a JRA recording with Latency data using JRockit Mission Control. This is a very powerful tool, well worth looking at for anyone working with bigger Java projects. A JRA recording is basically a recording of what is going on in the JVM. It has very low overhead, so you can run long recordings in production without problems. Try making a recording that captures at least one of the long pauses (use the latest version of JRockit as well, as this will give the most information). Latency data is a part of JRA recordings that measure everything that causes latencies. This can be file or network IO, synchronization or anything. Using such a recording, we can see if anything out of the ordinary is done during the pause. You can read more about it here: http://edocs.bea.com/jrockit/tools/jmcpdfs/mc3/mcjra3.pdf (chapter 2 and 16).
    Kind Regards
    /Mattis, JRockit Sustaining Engineering

  • Burning; song list judged too long

    All of a sudden, a song list shorter than 80 minutes is considered too big for my 700 MB, 80-minute blank CDs. I burned today with the same CDs and the same iTunes version 7.1 without a glitch. After consenting to distributing the list over multiple CDs - what else can you do? - just nothing happens.
    Anybody have any idea? It will be appreciated.

    I have gone through five pages of topics, and I note that a number of people have the same problem, some of them experienced burners like myself.
    It is unlikely that the quality of the blank CD is the problem - I have been using Maxwell 700 MB, 80 min, with success - and we all know how to burn, so the preferences may be assumed to have been set correctly.
    We all use iTunes 7.1.
    It may be that the new (most recent) Security Update, which I installed yesterday, is the culprit.

  • Formulas in this table took too long to calculate so we cannot show you the results

    Hello, I've been getting this message when I load the responses.
    I do have calculations in the responses form, but they are relatively simple 'sum', a few 'if' statements and so on.
    Also there are only 500 lines in the current response form, so I would not expect this level of calculation to prevent form loading.
    Any advice would be much appreciated.
    BR, Pete 

    I think I have found a workaround to my own problem and thought I'd post it in case it helps someone (or hopefully Adobe might fix the bug or advise me of some other detail I had missed).
    I made a duplicate copy of my form and responses and deleted the formulae one by one (a painful process, as each deletion took several minutes to respond and refresh the screen).
    The problem formula seemed to be a '=sum(D:G)' function that summed 4 columns. When I deleted that specific function the form suddenly started responding normally again.
    I've now replaced the formula with a simple addition statement: '=D+E+F+G' and all seems well.

  • HT201210: I have done all of the things on this list, including editing the hosts file, but get the same error each time (3194). I am trying to restore my iPhone 4 (iOS 6) to get it unlocked. Diagnostics show no issue communicating with Apple. Help.

    I am trying to restore my iPhone 4 (iOS 6) to achieve unlock. I hit Restore and it downloads the software and confirms with Apple, then displays the 3194 error. I have turned off my virus scanner, unplugged all USB devices except the iPhone, edited the hosts file, and run diagnostics, which show that communication with Apple is fine. What do I need to do to get my iPhone to restore?

    There is nothing wrong with the OS update.
    Delete ALL your email accounts.
    Restart the PlayBook.
    Put the accounts back and ensure they are all set with PUSH ON.  Manual (push off) will burn battery.
    Similarly, delete your Wi-Fi connections and add them back when required.
    Turn off Wi-Fi if not connected to Wi-Fi.
    Any "hunting for connection" in email or Wi-Fi will burn up battery.

  • CreateOUIProcess(): -1 : argument list too long

    Hi,
    I tried to install 8i on Mandrake 6.1.
    When I call "./runInstaller" I get :
    Initializing Java Virtual Machine from /usr/local/jre/bin/jre.
    Please wait ...
    Error in CreateOUIProcess(): -1 :
    Die Argumentliste ist zu lang
    (meaning: argument list is too long)
    Any suggestions ?
    TIA,
    Joerg

    I get the same message with Mandrake 6.1.
    But I had no problem installing Oracle 8.1.5 std under Mandrake
    6.0.
    My guess is that Mandrake made some small change that broke the
    installation script.

  • Keyword list too long

    Since iPhoto version 8.0.2, and up to the current version 9.1.5, I have had one problem: my keyword list is too long. Clicking on "search" and then on "keyword" opens a list of four columns with far too many rows for my display, so all keywords beginning with "a" through "k" are not visible and consequently not choosable for search.
    How can I search the first part of my keywords, from "a" to "k", when the list is too long to be completely displayed?
    PS: In the first iPhoto versions I bypassed the problem by using "Keyword Manager" (bullstorm.se), but since the newest version it is no longer compatible...

    You can't, unfortunately.
    iPhoto menu -> Provide iPhoto Feedback
    Of course, you can still search using Smart Albums.
    Regards
    TD

  • ICal stopped because of too long reminder text

    I copied a text into iCal as a reminder, but apparently the text is too long, and therefore every time I start up iCal on my Mac it crashes.
    I can see my calendar but cannot add or change anything.
    Does anyone have the same experience, and can anyone tell me how to solve this issue?
    iCal still works on both my iPad and iPhone without any issues, but on my Mac it crashes instantly when I start it up.

    I solved the issue myself by removing the reminders via my iPhone and updating iCloud, and voilà, iCal works again on my Mac :-)

  • RPCPRRU0 takes too long

    Dear Experts,
    Report RPCPRRU0 takes too long to complete. We have already tuned memory, Oracle, and other parameters; only this job takes too long to complete.
    Please suggest.
    Thanks,
    Yoga

    @Arjun - After running statistics on the tables, it was much better. The report was running for 15 hours and has now been reduced to 2.30 hours.
    @Augusto - Thanks for the information. As you said, yes, we are also running the payroll every week. The job actually runs on a dedicated background job server, and sometimes we tried running it on the CI and DB servers, with the same result.
    Could you please share your experience of the optimizations you made to the report?
    Thanks in Advance,
    Yoga
