PL/SQL loop is too slow??

Hi all,
I have a basic loop (LOOP..END LOOP) for reading a very large VARCHAR2 variable. In this loop I always read a 1-byte header (the size in bytes) and then that amount of data. Why is this loop so slow? Is there anything I can do to speed it up?
Thanks.

Function to read data from a socket.
It always reads 1 byte for the length and then, using that length,
the body of the message.
FUNCTION PEGA_MESG (io_Conex_Env IN OUT Utl_Tcp.connection)
   RETURN LONG RAW
IS
   v_Buffer     LONG RAW;
   v_Mesg       LONG RAW;
   v_intResp    INTEGER;
   v_intBytes   INTEGER;
   v_qtde_bytes INTEGER;
BEGIN
   v_Mesg := NULL;
   LOOP
      -- read the 1-byte header: the length of the message body
      v_intResp := utl_tcp.read_raw(io_Conex_Env, v_Buffer, 1);
      v_qtde_bytes := ascii(utl_raw.cast_to_varchar2(v_Buffer));
      -- read that many bytes as the message body
      v_intResp := utl_tcp.read_raw(io_Conex_Env, v_Mesg, v_qtde_bytes);
      EXIT WHEN v_Mesg IS NOT NULL;
   END LOOP;
   RETURN v_Mesg;
END;
This is a small piece of my program. If I send a lot of data, my program gets very slow.
loop
   v_Mesg := Pega_Mesg(io_conexao);
   if v_Mesg = utl_raw.cast_to_raw('sum') then
      v_Mesg := Pega_Mesg(io_conexao);
      l_Dados.DELETE;
      while v_Mesg <> utl_raw.cast_to_raw('fimsum') loop
         for sumario in 1..4 loop -- loop over the number of fields in the saida_sumario table
            l_Dados(sumario) := ...
(The rest of the post was truncated by the forum's ~1 kB posting limit.)
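
A minimal sketch of the same read without the outer loop, assuming the protocol really is one length byte followed by the body: utl_tcp.read_raw already blocks until the requested number of bytes has arrived, so one call for the header and one for the body is enough, and a plain RAW(255) buffer avoids growing a LONG RAW (names are illustrative, not tested against the poster's server):

FUNCTION pega_mesg (io_conex_env IN OUT utl_tcp.connection)
   RETURN RAW
IS
   v_header RAW(1);
   v_body   RAW(255);   -- a 1-byte length caps the body at 255 bytes
   v_len    PLS_INTEGER;
   v_read   PLS_INTEGER;
BEGIN
   -- one read for the length byte
   v_read := utl_tcp.read_raw(io_conex_env, v_header, 1);
   v_len  := utl_raw.cast_to_binary_integer(v_header);
   -- one read for the whole body; read_raw waits until all v_len bytes arrive
   v_read := utl_tcp.read_raw(io_conex_env, v_body, v_len);
   RETURN v_body;
END pega_mesg;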

Similar Messages

  • PL/SQL block is too slow; would a procedure be a better option?

    Hi all,
    How do I tune a PL/SQL block that traverses cursors, fetches millions of records, and then executes inserts into different tables
    using EXECUTE IMMEDIATE statements?
    It's too slow: it takes 10 hours to populate 40 tables holding millions of records.
    As I have to make some modifications to the data, I cannot do it by CTAS,
    i.e. a single SQL statement.
    Should I make it a procedure? Would that help?
    Please help or suggest, as I am new to PL/SQL.
    My code looks like this:
    declare
    cursor cur_table1 is
         select field1, field2, field3, field4 from table1;
    begin
    for i in cur_table1
    loop
         execute immediate 'insert into table2 (field1,field2,field3,field4) '||
              'select :1, field2, field3, field4 '||
              '  from table1 where field3 = :2'
         using i.field1||'_'||to_char(sysdate,'ddmmyyyy hh12:mi:ss'), i.field1;
         commit;
    end loop;
    end;
    Thanks and Regards,

    declare
    cnt number := 0;                 -- running count of objects copied
    cnt_parentid_1 number := 0;      -- count of objects re-parented under the book folder
    object_exists number;
    object_parentid_exists number;
    cursor cur_projects is
         select PROJECTID, PROJECTNAME, DESCRIPTION, DELETED, DELETINGDATE, ACTIVE, ADMINONLY, READONLY, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, CREATOR, CREATED, MODIFIER, MODIFIED from projects;
    cursor cur_projectversion(p_projectid projects.projectid%TYPE) is
         select PROJECTID, PROJECTVERSIONID, PROJECTVERSIONNAME, DESCRIPTION, DELETED, DELETINGDATE, ACTIVE, ADMINONLY, READONLY, decode(EFFECTIVEDATE,null,trunc(sysdate),EFFECTIVEDATE) EFFECTIVEDATE, EXPIRATIONDATE, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, PUBLICATIONSRCPROJVERNAME, CREATOR, CREATED, MODIFIER, MODIFIED, PROFILELOADERCLASS /*, TRACKCHANGES */
         from projectversions where PROJECTID = p_projectid;
    cursor cur_objects(p_projectid projects.projectid%TYPE, p_projectversionid projectversions.projectversionid%TYPE) is
         select PROJECTID, PROJECTVERSIONID, OBJECTID, OBJECTKEY, PARENTID, KIND, NAME, TITLE, OWNER, CREATED, MODIFIER, MODIFIED, READY_TO_PUBLISH, LAST_PUBLISHED_DATE, LAST_PUBLISHER, EFFECTIVE_PUBLISHING_DATE, PUBLISHER, PUBLISHING_DATE /*, to_lob(scripttext) */ from OBJECTS where PROJECTID = p_projectid and PROJECTVERSIONID = p_projectversionid /* order by objectid */;
    begin
    for i in cur_projects
    loop
         dbms_output.put_line('PROJECTID => '||i.projectid);
         dbms_output.put_line('_________________________________');
         execute immediate 'insert into &TARGET_USER\.projects(locktimeout, memorysavinglevel, preloadobjects, projectid, projectname, description, deleted, deletingdate, active, adminonly, readonly, securityclass, projectcontact, defaultversion, defaultstartpage, imagepath, maxexamineerrors) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17)'
         using i.locktimeout, i.memorysavinglevel, i.preloadobjects, i.projectid, i.projectname, i.description, i.deleted, i.deletingdate, i.active, i.adminonly, i.readonly, i.securityclass, i.projectcontact, i.defaultversion, i.defaultstartpage, i.imagepath, i.maxexamineerrors;
         for k in cur_projectversion(i.projectid)
         loop
              for l in cur_objects(k.projectid, k.projectversionid)
              loop
                   cnt := cnt + 1;
                   select count(1) into object_exists from &TARGET_USER\.objects where objectid = l.objectid and projectversionid = 1 and projectid = l.projectid;
                   if object_exists = 0
                   then
                        if l.objectid = 1 -- book object: objectid = 1 and parentid = 0
                        then
                             execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID, PROJECTVERSIONID, OBJECTID, OBJECTKEY, PARENTID, NAME, KIND, LAST_PUBLISHED_DATE, LAST_PUBLISHER, REVISIONID, DISPLAYORDER, READONLY, DELETED) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                             using l.PROJECTID, 1, l.OBJECTID, l.OBJECTKEY, 0, l.NAME, l.KIND, '', '', '', 0, 'N', 'N';
                        else
                             select count(1) into object_parentid_exists from objects where objectid = l.parentid and projectversionid = 1 and projectid = l.projectid;
                             if object_parentid_exists = 0 -- set parentid to 1
                             then
                                  cnt_parentid_1 := cnt_parentid_1 + 1;
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID, PROJECTVERSIONID, OBJECTID, OBJECTKEY, PARENTID, NAME, KIND, LAST_PUBLISHED_DATE, LAST_PUBLISHER, REVISIONID, DISPLAYORDER, READONLY, DELETED) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID, l.OBJECTKEY, 1, l.NAME, l.KIND, '', '', '', 0, 'N', 'N';
                             else
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID, PROJECTVERSIONID, OBJECTID, OBJECTKEY, PARENTID, NAME, KIND, LAST_PUBLISHED_DATE, LAST_PUBLISHER, REVISIONID, DISPLAYORDER, READONLY, DELETED) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID, l.OBJECTKEY, l.PARENTID, l.NAME, l.KIND, '', '', '', 0, 'N', 'N';
                             end if;
                        end if;
                   end if;
                   execute immediate 'INSERT INTO &TARGET_USER\.objectversions(PROJECTID, OBJECTID, PROJECTVERSIONID, VERSIONNAME, OBJECTVERSIONID, REVISIONID, DESCRIPTION, TITLE, OWNER, CREATED, MODIFIER, MODIFIED, READY_TO_PUBLISH, LAST_PUBLISHED_DATE, LAST_PUBLISHER, EFFECTIVEDATE, SCRIPTTEXT, REVIEWSTATUS, READONLY, PUBLISHED, DELETED) '||
                        'SELECT PROJECTID, OBJECTID, 1, owner||:1, PROJECTVERSIONID, '''', '''', TITLE, OWNER, CREATED, MODIFIER, MODIFIED, ''N'', '''', '''', :2, to_lob(SCRIPTTEXT), '''', ''N'', ''N'', '''' '||
                        'FROM OBJECTS '||
                        'WHERE PROJECTID = :3 and PROJECTVERSIONID = :4 and OBJECTID = :5'
                        using '_'||TO_CHAR(k.EFFECTIVEDATE,'DDMMYYHHMISS'), k.EFFECTIVEDATE, l.projectid, l.projectversionid, l.objectid;
              end loop;
              dbms_output.put_line(cnt||' OBJECTS, OBJECTVERSIONS POPULATED');
              dbms_output.put_line(cnt_parentid_1||' DUMPED UNDER BOOK FOLDER');
              cnt_parentid_1 := 0;
              cnt := 0;
    ............
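
    For reference, a set-based version of the block in the original question is sketched below - a single INSERT ... SELECT instead of the row-by-row EXECUTE IMMEDIATE loop (table and column names come from the question; the poster's WHERE field3 = :2 lookup is folded away on the assumption that each source row is transformed and inserted once):

    insert into table2 (field1, field2, field3, field4)
    select field1 || '_' || to_char(sysdate, 'ddmmyyyy hh12:mi:ss'),
           field2, field3, field4
      from table1;
    commit;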

  • SQL Developer 2.1 working too slow

    Hi All,
    I am working with SQL Developer 2.1 after 3 years; previously I used version 1.2. Many things changed in version 2.1, but it is too slow, and while debugging, variable values are not shown in the tooltip, so we have to depend on the SmartData tab. If any settings or patches are available for the following problems, please provide them:
    1. Too slow
    2. Variable values in the tooltip
    By
    Srinivas M. P.

    SQL Developer is using 127232 KB, with 1.32 GB free.
    Is there a setting in SQL Developer, or are you talking about my system memory?
    If you are talking about my system memory, I am using 2 GB of RAM and the system works well.
    If I do anything in TOAD, TOAD works fine, but SQL Developer works too slowly.
    Edited by: SrinivasMP on Feb 5, 2010 3:34 PM

  • Performance is too slow on SQL Azure box

    Hi,
    Performance is too slow on our SQL Azure box (located in Europe).
    The query below returns 500,000 rows in 18 minutes on the SQL Azure box (connected via SSMS from India):
    SELECT * FROM TABLE_1
    Whereas on a local server it returns 500,000 rows in 30 seconds.
    SQL Azure configuration:
    Service Tier/Performance Level : Premium/P1
    DTU : 100
    Max DB Size : 500 GB
    Max Worker Threads : 200
    Max Sessions : 2400
    Benchmark Transaction Rate : 105 transactions per second
    Predictability : Best
    Any suggestion would be highly appreciated.
    Thanks,

    Hello,
    Can you please explain in a little more detail the scenario you are testing? Are you comparing a SQL Database in Europe against a SQL Database in India? Or a SQL Database with a local, on-premises SQL Server installation?
    In the first scenario, the round-trip latency of the connection to the datacenter might play a role.
    If you are comparing against a local installation, please note that you might be running on completely different hardware specifications and without network delay, resulting in very different results.
    In both cases you can use the blog post below to assess the resource utilization of the SQL Database during the operation:
    http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
    If the DB utilization reaches 100%, you might have to consider upgrading to a higher performance level to achieve the throughput you are looking for.
    Thanks,
    Jan
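
    As a sketch of the kind of check the linked post describes (Azure SQL Database keeps roughly an hour of 15-second samples in sys.dm_db_resource_stats; run this in the user database while the test query executes):

    SELECT TOP 20 end_time,
           avg_cpu_percent,
           avg_data_io_percent,
           avg_log_write_percent
    FROM   sys.dm_db_resource_stats
    ORDER  BY end_time DESC;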

  • Performance too Slow on SQL Azure box

    Hi,
    Performance is too slow on our SQL Azure box:
    The query below returns 500,000 rows in 18 minutes on the SQL Azure box (connected via SSMS):
    SELECT * FROM TABLE_1
    Whereas on a local server it returns 500,000 rows in 30 seconds.
    SQL Azure configuration:
    Service Tier/Performance Level : Premium/P1
    DTU : 100
    Max DB Size : 500 GB
    Max Worker Threads : 200
    Max Sessions : 2400
    Benchmark Transaction Rate : 105 transactions per second
    Predictability : Best
    Thanks,

    Hello,
    Please refer to the following document too:
    http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Performance%20Guidance%20for%20SQL%20Server%20in%20Windows%20Azure%20Virtual%20Machines.docx
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • FORALL bulk delete is too slow to work; seeking advice

    I used a PL/SQL stored procedure to do some ETL work. It picks up refreshed records from a staging table, checks whether the same records exist in the target table, then does a FORALL bulk delete first, and finally a FORALL insert of all refreshed records into the target table. The insert part works fine; only the delete part is too slow to get the job done. My code is listed below. Please advise where the problem is. Thanks.
    Declare
    TYPE t_distid IS TABLE OF VARCHAR2(15) INDEX BY BINARY_INTEGER;
    v_distid t_distid;
    CURSOR dist_delete IS
    select distinct distid FROM DIST_STG where data_type = 'H';
    BEGIN
    OPEN dist_delete;
    LOOP
    FETCH dist_delete BULK COLLECT INTO v_distid LIMIT 1000;
    EXIT WHEN v_distid.COUNT = 0;
    FORALL i IN v_distid.FIRST..v_distid.LAST
    DELETE DIST_TARGET WHERE distid = v_distid(i);
    END LOOP;
    CLOSE dist_delete;
    COMMIT;
    end;
    /

    citicbj wrote:
    > Justin:
    > The answers to your questions are:
    > 1. Why would I not use a single DELETE statement? Because this PL/SQL procedure is part of an ETL process. The procedure is scheduled by the Oracle scheduler; it will automatically run to refresh the data. Putting the DELETE in a stored procedure makes it easier to execute from the scheduler.
    You can compile SQL inside a PL/SQL procedure / function just as easily as coding it the way you have, so that's really not an excuse. As Justin pointed out, the straight SQL approach is what you want to use.
    > 2. The records in dist_stg with data_type = 'H' vary by month, ranging from 120 to 5,000 records. These records were inserted into the target table before, but they have since been updated in the transactional database. We need to delete the old records in the target and insert the updated ones in their place. The distID is the same and unique: I use distID to delete the old rows and insert updated records with the same distID into the target again. When users run a report, the updated records show up on it. As a plain SQL statement, deleting 5,000 records takes only seconds. With my code above, it takes forever; the database keeps going without any error message, and there are no triggers or FKs involved.
    > 3. Merge. I haven't tried that yet. I may give it a try.
    Quite likely a good idea based on what you've outlined above, but at the very least, replace the procedural code around the delete as suggested by Justin.
    Thanks.
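
    The straight-SQL approach suggested above would look something like this (table and column names taken from the original post; one statement, no cursor loop):

    DELETE FROM dist_target
    WHERE  distid IN (SELECT distid
                      FROM   dist_stg
                      WHERE  data_type = 'H');
    COMMIT;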

  • MySQL Connection too slow

    Please, anyone with experience in applets accessing a MySQL server!
    I ran a test with a loop executing queries and reached the following conclusion:
    On average a query takes more than 2 seconds. I used PreparedStatement for better performance but it doesn't make things better. I've heard about persistent connections. Are they really faster? Do you know anything I could do for better performance? Note: when I ping my server the response time is 200 ms. I'm in Brazil, but I don't think that should make MySQL as slow as it is. Please help me...
    I'm very thankful for your help.
    My Java code:
         public void ConectaBD() {
              try {
                   Class.forName("com.mysql.jdbc.Driver").newInstance();
                   //Class.forName("org.gjt.mm.mysql.Driver").newInstance();
              } catch (Exception e) {
                   System.out.println("Error: " + e);
              }
              try {
                   con = DriverManager.getConnection("jdbc:mysql://labinfor.com.br/labinfor_codvirtual?user=labinfor_cliente&password=800091");
              } catch (SQLException e) {
                   System.out.println("Error: " + e);
                   e.printStackTrace();
              }
         }

         public void Logon() {
              java.util.Date start = new java.util.Date();
              long startTime = start.getTime();
              if (ValidaCPF(TFCPF.getText())) {
                   queryLogon = "SELECT Nome FROM clientes WHERE CPF='" + TFCPF.getText().trim() + "' AND senha='" + TFSenha.getText() + "'";
                   int indCon = 0;
                   Timer t = new Timer(1000, this);
                   t.start();
                   while (indCon < 100) {
                        Conecta();
                        try {
                             stmt[indStmt] = con.createStatement();
                             rsLogon = stmt[indStmt].executeQuery(queryLogon);
                             Achou = "false";
                             while (rsLogon.next()) {
                                  Achou = "true";
                                  StrNome = rsLogon.getString(1);
                                  LValidaCPF.setText("Bem-vindo " + StrNome);
                                  TFCPF.disable();
                                  TFSenha.disable();
                                  BLogon.setLabel("Logoff");
                                  System.out.println("delay[" + indCon + "]= " + System.currentTimeMillis());
                             }
                             if (Achou.equals("false")) {
                                  LValidaCPF.setText("CPF ou senha incorreto");
                             }
                             Desconecta();
                        } catch (Exception e) {
                             System.out.println("Error: " + e);
                        }
                        indCon += 1;
                   }
              }
         }

    Connection times of 1 to 5 seconds are normal. Pool your connections.
    To understand why:
    First, setting up a TCP/IP connection takes roughly 50% longer than a ping; a ping is one packet to the server and one packet back, a TCP/IP handshake to create a connection is one packet to the server, one back, and another to the server. Then you have a required brief wait before the connection is considered complete; we'll ignore the wait and call it .3 seconds for the TCP/IP handshake.
    Second, the database has to go through the login process. Just sending the login request and response is yet another packet round trip; in your case, that's another .2 seconds in just network time alone. With MySQL, the login involves checking the security of the connection and the user; it often requires a reverse DNS lookup on the IP address of the client. Probably 4 or 5 tables have to be queried and a user environment is created within the server. The server work for the login might take .75 seconds, at a guess. Adding the network time makes this step .95 seconds or so.
    Third, there's the time it takes to do the query itself; another network round trip; .2 second network round trip, plus maybe .1 second to run the query on the database, for a total of .3 seconds.
    Fourth, there's the time it takes to tear down the TCP/IP connection on connection close. That involves 4 packets, 2 round-trips, so add another .4 seconds.
    Adding it all up, you get .3 + .95 + .3 + .4 = 1.95 seconds, of which .3 is your SQL query (network and DB time), .9 is network time for TCP/IP connection and login request, and .75 is DB time for the login.

  • PL/SQL procedure is 10x slower when run from WebLogic

    Hi everyone,
    we've developed a PL/SQL procedure that performs reporting - the original solution was written in Java, but due to performance problems we decided to switch this particular piece to PL/SQL. Everything works fine as long as we execute the procedure from SQL Developer - processing a batch of 20000 items finishes in about 80 seconds, which is a serious improvement over the previous solution.
    But once we call the very same procedure (on exactly the same data) from WebLogic, performance drops seriously - instead of 80 seconds it suddenly runs for about 23 minutes, which is 10x slower. And we don't know why this happens :-(
    We've profiled the procedure (in both environments) using DBMS_PROFILER, and we've found that when the procedure is executed from WebLogic, one of the SQL statements runs noticeably slower and consumes about 800 seconds (90% of the total run time) instead of 0.9 seconds (2% of the total run time), but we're not sure why - in both cases this query is executed 32742 times, giving 24 ms vs. 0.03 ms on average.
    The SQL is:
    SELECT personId INTO v_personId FROM (
            SELECT personId FROM PersonRelations
            WHERE extPersonId LIKE v_person_prefix || '%'
    ) WHERE rownum = 1;
    Basically it returns the ID of a person according to some external ID (or a prefix of that ID). I do understand why this query might be a performance problem (the LIKE operator etc.), but I don't understand why it runs quite fast when executed from SQL Developer and 10x slower when executed from WebLogic (exactly the same data, etc.).
    We're using Oracle 10gR2 with WebLogic 10, running on a separate machine - there are no other intensive tasks, so there's nothing that could interfere with the Oracle process. According to the 'top' command, the wait time is below 0.5%, so there should be no serious I/O problems. We've even checked the JDBC connection pool settings in WebLogic, but I doubt this issue is related to JDBC (and everything looks fine anyway). The statistics are fresh and the results are quite consistent.
    Edited by: user6510516 on 17.7.2009 13:46

    The setup is quite simple - the database is running on a dedicated database server (development only). Generally there are no 'intensive' tasks running on this machine, especially not when the procedure I'm talking about was executed. The application server (WebLogic 10) is running on a different machine, so it does not interfere with the database (in this case it was my own workstation).
    No, the procedure is not called 20000x - we have a table with a batch of records we need to process, marked with a given flag (say processed=0). The procedure reads them using a cursor and processes the records one by one. By 'processing' I mean computing some sums, updating another table, etc., and finally switching the record to processed=1. I.e. the procedure looks like this:
    CREATE PROCEDURE process_records IS
        v_record records_to_process%ROWTYPE;
    BEGIN
         OPEN records_to_process;
         LOOP
              FETCH records_to_process INTO v_record;
              EXIT WHEN records_to_process%NOTFOUND;
              -- process the record (update table A, insert a record into B, delete from C, query table D ...)
              -- and finally mark the row as 'processed=1'
         END LOOP;
         CLOSE records_to_process;
    END process_records;
    The procedure is actually part of a package and the cursor 'records_to_process' is defined in the body. One of the queries executed in the procedure is the SELECT mentioned above (the one that jumps from 2% to 90%).
    So the only thing we actually do in WebLogic is
    CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
    cstmt.execute();
    and that's it - there is only one call through JDBC, so network overhead shouldn't be a problem.
    There are 20000 rows we use for testing - we just update them to 'processed=0' (and clear some of the other tables), so each run uses exactly the same data and code paths and produces the very same results. Yet when executed from SQL Developer it takes 80 seconds, and when executed from WebLogic it takes 800 seconds :-(
    The only difference I've just noticed is that when using SQL Developer we're using PL/SQL notation, i.e. "BEGIN ProcessPkg.process_records; END;" instead of "{call }", but I guess that's irrelevant. And yet another difference - WebLogic uses the JDBC driver from 10gR2, while SQL Developer is bundled with the JDBC driver from 11g.
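
    One way to start checking the "same statement, different behaviour" theory is to compare the cursor children the two environments produce: if SQL Developer and WebLogic get different plans, v$sql will show two children with different plan hash values, and v$sql_optimizer_env shows how their optimizer settings differ. A sketch (the LIKE pattern is illustrative; match your own SQL text and substitute the sql_id you find):

    SELECT sql_id, child_number, plan_hash_value
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT personId%PersonRelations%';

    SELECT child_number, name, value
    FROM   v$sql_optimizer_env
    WHERE  sql_id = '&sql_id'
    ORDER  BY child_number, name;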

  • Bumblebee performance is too slow

    Hi everyone,
    This is my second post in this forum, and it has been only 2 days since I met Arch. Before, I used Ubuntu for 3 years, but due to low performance on my PC I unfortunately decided to say goodbye, which was hard to do.
    Now I am trying to have the same setup as on my previous laptop, and Bumblebee was part of it. I followed the instructions here: https://wiki.archlinux.org/index.php/Bumblebee.
    It seems that Bumblebee is installed and working; however, the FPS is too low:
    $ optirun glxspheres64 -info
    Polygons in scene: 62464
    Visual ID of window: 0x20
    Context is Direct
    OpenGL Renderer: GeForce GT 520MX/PCIe/SSE2
    0.023848 frames/sec - 0.021114 Mpixels/sec
    Without optirun I get:
    $ glxspheres64 -info
    Polygons in scene: 62464
    Visual ID of window: 0x20
    Context is Direct
    OpenGL Renderer: Mesa DRI Intel(R) Sandybridge Mobile
    0.033237 frames/sec - 0.029426 Mpixels/sec
    0.029968 frames/sec - 0.026533 Mpixels/sec
    This is impossible as I was getting very good results before.
    I am wondering if I did something wrong, or missed anything.
    Just for information, system specs:
    Intel i7 2670QM 2.2 GHZ
    4 GB RAM
    1 GB GeForce GT 520MX
    512 MB Intel Graphics
    I played 0ad with and without optirun and the performance was good in both cases, but I'm not sure whether it switches video cards by itself.
    I also have bbswitch installed.
    Any help would be appreciated. Thank you.
    Last edited by wakeup12 (2014-09-27 14:29:19)


  • Disk is too slow (Record)(-10004) error..so sick of this.

    Hello all,
    I can no longer record more than three tracks in Logic without getting the error message "Disk is too slow (Record)(-10004)". When this happens, recording stops.
    At first I suspected my drive was faulty, maybe slowing down. So, being in the middle of a session that had taken me 2 hours to set up, I called for a break and rushed off and bought an external Seagate 7200 rpm FireWire 800 drive. I installed it and set it as the recording path for the project. There was no change; the same error occurred.
    I then switched the target drive to another internal one I use for Time Machine - same problem occurred.
    It seems to me that this problem has nothing to do with my drives. I am at a loss to explain it. I have looked for hours online for a solution but while many have experienced this there seem to be few answers out there.
    Unless I find a solution this will be my last project with Logic. I tried and tried for the last 5 years to use this program but things like this keep happening. It's glitchy with UAD cards, Duende, RME interfaces, Midi controllers, Hard Drives, RAM and external clocks. I've had problems with them all over the years. I will most likely switch to Cubase which I feel is inferior for editing and loops, but at least it seems to be stable.
    If anyone has any insight I'll try and fix it, but I just can't keep shelling out money for a program that just doesn't work.

    I am experiencing a similar problem & have been receiving the same messages, even while recording as little as one track and playback has become an issue as well. However, THIS WAS NOT ALWAYS THE CASE. I have heard of people with this same problem, where they receive this message out of nowhere after logic has been working perfectly for them.
    I also would like to note that I am running all settings in logic for optimized recording and playback (audio & buffer settings etc etc etc)
    THIS IS NOT A HARDWARE ISSUE, at least in my situation as I am running a fast internal HD & have ample memory. Please reach out if you feel like you have a pragmatic solution to this issue.
    This may be a possible lead on the fix... I remember reading this post from a user "soundsgood" in 2008 who was having a similar issue. I don't completely understand his solution, but if someone could enlighten me, I feel that this might be the solution to our issue:
    "Okay - forgive me 'cause I'm a newbie on this forum and if somebody else has already figgered this out, I'm sorry.... I've been having the same problem all of the sudden after many years of crash-free and error-free recording. I've read everything. I've pulled my hair out. I've done dozens of clean installs. I've repaired permissions so many times I can do it blindfolded. And sitting here tonight, it dawned on me.... there are TWO places Logic is sucking data from: wherever you've got your SONG files stashed, of course.... but it ALSO NEEDS TO ACCESS THE STARTUP DRIVE (or wherever else you might have your Apple Loops installed). I was watching my drives being accessed during a playback of a fairly complicated tune (most tracks were frozen of course), and both of the afore-mentioned drives were going berserk with accesses. We're all focusing on our dedicated audio drives, but forgetting about our boot drives (where Logic usually resides along with most or all of our loops). I carbon copy cloned my boot / operating system (including Logic) to a different (in my case, an external firewire) drive and the problem disappeared. Could've been because the cloning process de-fragged all the loops & stuff, or maybe my OS just likes snatching its sample/loop info from an external drive. Worked for me... so far....... let me know if it works for others....."

  • Insert or Update Item Master is too slow

    Dear Experts,
    I have a problem adding or updating Item Master Data. When I click Add/Update, the save process is too slow (it needs more than 5 minutes). This condition appeared yesterday; two days ago the add/update of item master data was normal.
    Other transactions, like business processes, are normal.
    I have shrunk the database in SQL, but it did not help.
    Can anybody help with my problem? What should I do?
    Thank you in advance for helping me

    Hi pak Hendra,
    May I know how big your live DB is now? If it is still slow after reindexing, you may consider the following suggestions:
    1. Increase your server HDD size
    2. Increase your server's processor
    3. Cut off and do a new opening balance
    Does the problem happen in a branch/outlet? Do you use Citrix to access the server from the branch? Of course you can reindex the DB to address this issue, but it is not the only solution. You must also look at the connection.
    Rgds,
    JimM@sbo_knowledge_village

  • Mail server is too slow to deliver mail to the internal domain

    Hi,
    My mail server is fast enough when sending mail to other domains, but when I try to send mail to my own domain it is too slow; sometimes it takes 30 to 40 minutes to deliver the mail.
    Please help
    Thanks,
    Gulab Pasha

    You should use Statspack to check what the main waits are.
    Some indicators to check:
    - too many FTS / excessive IO => check SQL statements (missing index, wrong where clause)
    - explain plan for the most important queries: using CBO or RBO? If CBO, statistics should be up to date. If RBO, check the access path.
    - excessive logfile switches (> 5 per hour) => increase the logfile size or disable logging
    - undo waits => not enough rollback segments (if you don't use AUM)
    - data waits => alter INITRANS, PCTFREE, PCTUSED
    - too many chained rows => rebuild the affected data or rebuild the table
    - too many levels in indexes => rebuild the index
    - excessive parsing => use bind variables or alter the CURSOR_SHARING parameter
    - too many sorts on disk => increase SORT_AREA_SIZE and create other temporary tablespaces on separate disks
    - too many block reads for a row => DB_BLOCK_SIZE too small or too many chained rows
    - too much LRU contention => increase latches
    - OS swapping/paging?
    To improve performance:
    - alter and tune some parameters: OPTIMIZER_MODE, SORT_AREA_SIZE, SHARED_POOL_SIZE, OPTIMIZER_INDEX_COST_ADJ, DB_FILE_MULTIBLOCK_READ_COUNT...
    - keep the most useful packages in memory
    - gather statistics regularly (if using the CBO)
    How do your users access the DB?
    Jean-François Léguillier
    Consultant DBA
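
    A minimal sketch of the Statspack workflow the first line refers to, assuming the PERFSTAT schema has already been installed with spcreate.sql:

    EXEC statspack.snap;
    -- ... let the slow deliveries happen ...
    EXEC statspack.snap;
    -- then generate the report between the two snapshot ids:
    @?/rdbms/admin/spreport.sql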

  • Oracle 10g direct path write too slow

    Hi All,
    We have Oracle 10g on a Solaris virtual server, with VMware ESXi as the host. Data files are on RAID1 internal storage on an HP DL585, with a VMFS partition at the ESXi level. The problem is that DB writes for a CREATE TABLE AS SELECT ... statement are way too slow: to create a table of 0.5 GB, the DB takes 9 minutes, which amounts to about 1 MB/s. When we test FTP or a file copy at the Solaris level with a file of the same size (0.5 GB), it flies through in less than a minute. This is Oracle 10.2.0.4, 8K data block, 2 vCPUs assigned to the Solaris VM. We have checked with VMware support for any known issues and also have an SR open with Oracle for any parameter changes that could help speed things up. Any clues or pointers from you all will be of great help.
    Thanks,
    Nikhil

    Here's the output from tkprof for waits:
    Elapsed times include waiting on the following events:
    Event waited on                     Times Waited   Max. Wait   Total Waited
    ----------------------------------  ------------  ----------  ------------
    single-task message                            1        0.17          0.17
    SQL*Net message to dblink                    150        0.00          0.00
    SQL*Net message from dblink                  150        0.04          0.32
    SQL*Net message to client                      1        0.00          0.00
    direct path write temp                      4003        1.16        804.93
    direct path read temp                       2563        0.14         35.86
    SQL*Net more data from dblink             126967        0.17         11.81
    SQL*Net message from client                    1       17.73         17.73
    'direct path write temp' has a total wait of 804.93 seconds. Also, I am NOT looking to tune a particular SQL statement. The database is slow overall on VMware, and I am looking for any gotchas for running Oracle 10g within a Solaris VM.
    Thanks,
    Nikhil
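
    Given that almost all of the elapsed time is 'direct path write temp', one plausible first check is how much the statement spills to temp and how the PGA is sized (v$tempseg_usage must be queried while the CTAS is running; the 8K factor matches the block size quoted above):

    SELECT name, ROUND(value/1024/1024) AS mb
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter',
                    'total PGA allocated');

    SELECT username, segtype, ROUND(blocks * 8 / 1024) AS mb
    FROM   v$tempseg_usage;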

  • EXPDP is too slow even though the value of CURSOR_SHARING was changed to EXACT

    Hi
    We have a 10g Standard Edition database (10.2.0.4) on Solaris 5, which is RAC with ASM. In fact we are planning to migrate it to Linux x86-64 and to 11.2.0.3. The database size is around 1.3 TB. We are planning to take an expdp export and impdp it into the new server and new version database.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Release 10.2.0.4.0 - Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE 10.2.0.4.0 Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL> !uname -a
    SunOS ibmxn920 5.10 Generic_127128-11 i86pc i386 i86pc
    As per the plan I started the expdp. Unfortunately the processing of tables continued for one and a half days and the actual export never started. After going through a few docs I found that CURSOR_SHARING should be EXACT to make expdp faster (previously it was SIMILAR). So I changed the parameter to EXACT on one of the nodes and started the export again last night on the same node where I changed the parameter. When I came back today the processing was still going on. I checked the job status and found that the table processing is still running; it is not hung at all, just too slow.
    What could be the reason? Here are the memory and kernel parameter details.
    Memory
    Memory: 24G phys mem, 6914M free mem, 31G swap, 31G free swap
    Kernel parameters
    forceload: sys/msgsys
    forceload: sys/semsys
    forceload: sys/shmsys
    set noexec_user_stack=1
    set msgsys:msginfo_msgmax=65535
    set msgsys:msginfo_msgmnb=65535
    set msgsys:msginfo_msgmni=2560
    set msgsys:msginfo_msgtql=2560
    set semsys:seminfo_semmni=3072
    set semsys:seminfo_semmns=6452
    set semsys:seminfo_semmnu=3072
    set semsys:seminfo_semume=240
    set semsys:seminfo_semopm=100
    set semsys:seminfo_semmsl=1500
    set semsys:seminfo_semvmx=327670
    set shmsys:shminfo_shmmax=4294967295
    set shmsys:shminfo_shmmin=268435456
    set shmsys:shminfo_shmmni=4096
    set shmsys:shminfo_shmseg=1024
    set noexec_user_stack = 1
    set noexec_user_stack_log = 1
    #Non-administrative users cannot change file ownership.
    rstchown=1
    Do I need to change any of the above? The dump is being written to a local file system.

    Hi,
    I'd be looking at doing this in parallel over a database link and completely missing out sending anything to NFS - it will make the whole process quicker (you effectively skip the export part and everything is an import into the new instance).
    I ran a 600 GB impdp this way over a DB link and it took maybe 12 hours (can't remember exactly) - a lot of that time is index building in the new database, so make sure your PGA etc. is set up correctly for that.
    LOB data massively slows down Data Pump, so that could be the issue here as well. You should be able to achieve the whole process in less than a day (if you have no LOBs...).
    Cheers,
    Harry
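
    A sketch of the over-the-link import Harry describes, using the DBMS_DATAPUMP API so nothing is written to a dump file at all (run on the new 11.2.0.3 instance; the database link name SRC_10G_LINK is a placeholder):

    DECLARE
       h     NUMBER;
       state VARCHAR2(30);
    BEGIN
       h := dbms_datapump.open(operation   => 'IMPORT',
                               job_mode    => 'FULL',
                               remote_link => 'SRC_10G_LINK');
       dbms_datapump.set_parallel(h, 4);
       dbms_datapump.start_job(h);
       dbms_datapump.wait_for_job(h, state);
       dbms_output.put_line('Job finished: ' || state);
    END;
    /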

  • Queue is too slow while publishing projects

    Hi All,
    I would like a suggestion from MS Project Server experts.
    I have an environment where the number of projects is more than 5k and the number of users is 400; because of this, the performance of the queue is too slow. Please suggest some way to overcome this.
    FYR:
    We have 4 application servers with 32, 14, 32, and 20 GB of RAM (all behind a load balancer).
    Just for a test I deleted most of the projects in our development environment, and I saw that performance increased; but the problem is that we can't delete projects from the production server.
    Regards, Pankaj Waghmare - MCTS | Consultant

    Pankaj,
    It seems like you have modeled your farm for a large dataset, so 5000 projects should not be a big deal.
    My guess is that your bottleneck is on the SQL side. Have you set up maintenance plans for your Project Server databases? http://technet.microsoft.com/en-us/library/cc973097(v=office.14).aspx
    Also check your queue settings. You might be able to increase the threads with such horsepower on the app servers.
    One final thing: do you have enough tempdb files?
    Create additional tempdb files
    Both Project Server 2010 and Microsoft SharePoint Server 2010 make extensive use of tempdb during SQL transactions. To improve performance, create additional tempdb files. To optimize performance, create an additional tempdb file for each processor (core) in the computer running SQL Server. Create the files on a separate partition from other database files.
    Prasanna Adavi, PMP, MCTS, MCITP, MCT http://thinkepm.blogspot.com
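
    The tempdb advice in the last paragraph translates to statements of this shape (T-SQL on the SQL Server instance hosting the Project Server databases; the file name, path, and sizes are placeholders - repeat once per core):

    ALTER DATABASE tempdb
    ADD FILE (NAME       = tempdev2,
              FILENAME   = 'T:\TempDB\tempdev2.ndf',
              SIZE       = 4GB,
              FILEGROWTH = 512MB);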
