Performance is too slow on SQL Azure box

Hi,
Performance is very slow on our SQL Azure box (located in Europe).
The query below returns 500,000 rows in 18 minutes on the SQL Azure box (connected via SSMS from India):
SELECT * FROM TABLE_1
On the local server, by contrast, the same query returns 500,000 rows in 30 seconds.
SQL Azure configuration:
Service Tier/Performance Level: Premium/P1
DTU: 100
Max DB Size: 500 GB
Max Worker Threads: 200
Max Sessions: 2,400
Benchmark Transaction Rate: 105 transactions per second
Predictability: Best
Any suggestions would be highly appreciated.
Thanks,

Hello,
Can you please explain in a little more detail the scenario you are testing? Are you comparing a SQL Database in Europe against a SQL Database in India, or a SQL Database against a local, on-premises SQL Server installation?
In the first scenario, the round-trip latency of the connection to the datacenter might play a role.
If you are comparing against a local installation, please note that you are likely running on completely different hardware and without network delay, which will produce very different results.
In both cases you can use the blog post below to assess the resource utilization of the SQL Database during the operation:
http://azure.microsoft.com/blog/2014/09/11/azure-sql-database-introduces-new-near-real-time-performance-metrics/
If the database utilization reaches 100%, you might have to consider upgrading to a higher performance level to achieve the throughput you are looking for.
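For example, a quick way to check is a query like the one below (a minimal sketch, assuming your server exposes the sys.dm_db_resource_stats DMV; run it in the user database itself):

SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats   -- one row per 15-second interval, roughly the last hour
ORDER BY end_time DESC;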
Thanks,
Jan 

Similar Messages

  • Performance too Slow on SQL Azure box

    Hi,
    Performance is very slow on our SQL Azure box:
    The query below returns 500,000 rows in 18 minutes on the SQL Azure box (connected via SSMS):
    SELECT * FROM TABLE_1
    On the local server, by contrast, the same query returns 500,000 rows in 30 seconds.
    SQL Azure configuration:
    Service Tier/Performance Level: Premium/P1
    DTU: 100
    Max DB Size: 500 GB
    Max Worker Threads: 200
    Max Sessions: 2,400
    Benchmark Transaction Rate: 105 transactions per second
    Predictability: Best
    Thanks,

    Hello,
    Please refer to the following document too:
    http://download.microsoft.com/download/D/2/0/D20E1C5F-72EA-4505-9F26-FEF9550EFD44/Performance%20Guidance%20for%20SQL%20Server%20in%20Windows%20Azure%20Virtual%20Machines.docx
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • Bumblebee performance is too slow

    Hi everyone,
    This is my second post in this forum, and it has been only two days since I met Arch. Before that, I used Ubuntu for three years, but due to low performance on my PC I unfortunately decided to say goodbye, which was hard to do.
    Now I am trying to recreate the setup of my previous laptop, and Bumblebee was part of it. I followed the instructions here: https://wiki.archlinux.org/index.php/Bumblebee.
    It seems that Bumblebee is installed and working; however, the FPS is far too low:
    $ optirun glxspheres64 -info
    Polygons in scene: 62464
    Visual ID of window: 0x20
    Context is Direct
    OpenGL Renderer: GeForce GT 520MX/PCIe/SSE2
    0.023848 frames/sec - 0.021114 Mpixels/sec
    Without optirun I get:
    $ glxspheres64 -info
    Polygons in scene: 62464
    Visual ID of window: 0x20
    Context is Direct
    OpenGL Renderer: Mesa DRI Intel(R) Sandybridge Mobile
    0.033237 frames/sec - 0.029426 Mpixels/sec
    0.029968 frames/sec - 0.026533 Mpixels/sec
    This can't be right, as I was getting very good results before.
    I am wondering if I did something wrong or missed anything.
    Just for information, system specs:
    Intel i7 2670QM 2.2 GHZ
    4 GB RAM
    1 GB GeForce GT 520MX
    512 MB Intel Graphics
    I played 0ad with and without optirun and the performance was good in both cases, but I am not sure whether it switches video cards by itself.
    I also have bbswitch installed.
    Any help would be appreciated. Thank you.


  • IR report with 1 million records and BLOB files: performance is too slow!

    We are using:
    oracle apex 4.2.x
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
    mod_plsql with Apache
    Hardware: HP proliant ML350P
    OS: WINDOWS 2008 R2
    We have a customized content management system developed in APEX. When the IR report is opened it finds 1 million rows, and each row has a BLOB (<5 MB: a PDF/TIFF/BMP/JPG file); the row count will keep growing. The search performance is very slow.
    How can we improve the performance?
    How can we show a progress indicator to the user while a search is running on the IR report itself?
    Thanx,
    Ram

    It's impossible to make definitive recommendations on performance improvement based on the limited information provided (in particular the absence of APEX debug traces and SQL execution plans), and without knowledge of the application requirements or access to real data.
    As noted above, this is mainly a matter of data model and application design rather than a problem with APEX.
    Based on what has been made available on apex.oracle.com, taking action on the following points may improve performance.
    I have concerns about the data model. The multiple DMS_TOPMGT_MASTER.NWM_DOC_LVL_0x_COD_NUM columns are indications of incomplete normalization, and the use of the DMS_TOPMGT_DETAILS table hints at an EAV model. Look at normalizing the model so that the NWM_DOC_LVL_0x_COD_NUM relationship data can be retrieved using a single join rather than multiple scalar subqueries. Store 1:1 document attributes as column values in DMS_TOPMGT_MASTER rather than rows in DMS_TOPMGT_DETAILS.
    There are no statistics on any of the application tables. Make sure statistics are gathered and kept up to date to enable the optimizer to determine correct execution plans.
    There are no indexes on any of the FK columns or search columns. Create indexes on FK columns to improve join performance, and on searched columns to improve search performance.
    More than 50% of the columns in the report query are hidden and not apparently used anywhere in the report. Why is this? A number of these columns are retrieved using scalar subqueries, which will adversely impact performance in a query processing 1 million+ rows. Remove any unnecessary columns from the report query.
    A number of functions are applied to columns in the report query. These will incur processing time for the functions themselves and context switching overhead in the case of the non-kernel dbms_lob.get_length calls. Remove these function calls from the query and replace them with alternative processing that will not impact query performance, particularly the use of APEX column attributes that will only apply transformations to values that are actually displayed, rather than to all rows processed in the query.
    Remove to_char calls from date columns and format them using date format masks in column attributes.
    Remove decode/case switches. Replace this logic using Display as Text (based on LOV, escape special characters) display types based on appropriate LOVs.
    Remove the dbms_lob.get_length calls. Instead add a file length column to the table, compute the file size when files are added/modified using your application or a trigger, and use this length column in place of the dbms_lob.get_length expression in the query.
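    As a minimal sketch of the statistics, indexing and file-length points above (nwm_doc_lvl_01_cod_num and nwm_doc_file_binary are taken from the report query; the file_length column, the object names and the column placement are assumptions):

    -- Gather optimizer statistics (repeat for each application table)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'DMS_TOPMGT_MASTER');
    END;
    /
    -- Index a foreign key / search column used in joins
    CREATE INDEX dms_master_lvl01_ix ON dms_topmgt_master (nwm_doc_lvl_01_cod_num);
    -- Persist the file size so the report query no longer calls dbms_lob.get_length
    ALTER TABLE dms_topmgt_master ADD (file_length NUMBER);
    CREATE OR REPLACE TRIGGER dms_master_file_len_trg
      BEFORE INSERT OR UPDATE OF nwm_doc_file_binary ON dms_topmgt_master
      FOR EACH ROW
    BEGIN
      :new.file_length := DBMS_LOB.GETLENGTH(:new.nwm_doc_file_binary);
    END;
    /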
    Searching using the Search Field text box in the APEX interactive report Search Bar generates a query like:
    select *
    from
      (select *
      from
        (...your report query...)
      ) r
      where ((instr(upper("NWM_DOC_REF_NO"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_DESC"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("SECTION_NAME"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("CODE_TYPE"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("REF_NUMBER_INDEX"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("DATE_INDEX"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("SUBJECT_INDEX"), upper(:apxws_search_string_1)) > 0
      or instr(upper("NWM_DOC_SERIEL"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_DESCRIPTION"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_STATUS"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("MIME_TYPE"), upper(:APXWS_SEARCH_STRING_1)) > 0
      or instr(upper("NWM_DOC_FILE_BINARY"), upper(:APXWS_SEARCH_STRING_1)) > 0 ))
      ) r
    where
      rownum <= to_number(:APXWS_MAX_ROW_CNT)
    This will clearly never make use of any available indexes on your table. If you only want users to be able to search using values from 3 columns then remove the Search Field from the Search Bar and only allow users to create explicit filters on those columns. It may then be possible for the optimizer to push the resulting simple predicates down into the inlined report query to make use of indexes on the searched column.
    I have created a copy of your search page on page 33 of your app and created an After Regions page process that will create Debug entries containing the complete IR query and bind variables used so they can be extracted for easier performance analysis and tuning outside of APEX. You can copy this to your local app and modify the page and region ID parameters as required.

  • Camileo X Sports - SD card performance issue - too slow

    Hello,
    Can anybody help me, please? When I turn on the cam and make the FIRST recording, it always tells me that the card speed is too slow, but I have a SanDisk 64 GB Extreme rated at 48 MB/s. The message only appears on the first recording, not on the following ones.
    What can I do? I also have the newest firmware.

    Yup! I had the same problem at first; I have the Samsung EVO Class 10 32 GB SDHC.
    It's usually brilliant; it actually works fine at 50 fps, 30 fps and even 720p 120 fps.
    However, I found the problem more easily than most people, simply by sheer coincidence.
    I left the screen on while recording, and right before it stopped recording (by the way, it never froze in my case, it just stopped recording) the screen displayed "low speed card"! What do you know!
    I researched the issue, and it turns out the Samsung EVO was a bit slower than what the camera requires at 60 fps; at 50 fps it stopped recording only once.
    SanDisk Micro SDHC Extreme Class 3 32GB SDSDQXN-032G-G46A
    This card is relatively cheap; it is UHS Class 3, which means it is an Ultra High Speed card, and Class 3 is a great rating. It is about 10 euros more expensive than the Samsung, and from what I have read it works great with the GoPro Hero 4 shooting 4K and 1080p at 120 fps, so it should hold up very well.
    It was never about the software upgrade or downgrade. Lucky I left the screen on!

  • SSRS performance is too slow

    Hi,
    I am using SQL Server 2008 R2 SP1. This simple query, "SELECT CustomerID FROM Customer", returns 42,900 records. When it is executed within SQL Server Management Studio (SSMS), it takes literally zero (0) seconds. However, when the same query is executed within SQL Server Reporting Services (SSRS) to feed a parameter, it takes about two (2) minutes. The same performance issue occurs when the report calls the stored procedure that serves as its main dataset: in SSMS the stored procedure takes under 3 seconds; in SSRS, about 7 minutes.
    Would you please help me identify the source of the problem and the possible solution?
    Thank you!

    Thank you all for your comments.
    Andrew,
    The "Executionlog3" view yields this info:
    <AdditionalInfo>
      <ProcessingEngine>2</ProcessingEngine>
      <ScalabilityTime>
        <Pagination>0</Pagination>
        <Processing>0</Processing>
      </ScalabilityTime>
      <EstimatedMemoryUsageKB>
        <Pagination>18</Pagination>
        <Processing>14807</Processing>
      </EstimatedMemoryUsageKB>
      <DataExtension>
        <SQL>1</SQL>
      </DataExtension>
    </AdditionalInfo>
    and
    TimeProcessing = 1768;  TimeRendering = 677; ByteCount = 88353
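    (For reference, values like these come from a query along the following lines against the report server catalog, assuming the default ReportServer database name; TimeDataRetrieval is the column worth watching for this issue:)

    SELECT TOP 10
           ItemPath,
           TimeStart,
           TimeDataRetrieval,  -- milliseconds spent retrieving the dataset rows
           TimeProcessing,
           TimeRendering,
           ByteCount,
           AdditionalInfo
    FROM ReportServer.dbo.ExecutionLog3
    ORDER BY TimeStart DESC;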
    The performance issue happens when the report reads from the SQL Server database, even for a simple query such as "SELECT DISTINCT CustomerID FROM Customer". This exact same query takes literally zero (0) seconds to fetch results when run directly within SQL Server Management Studio.
    Any other suggestions, please?
    Thank you!

  • Sales & Order report performance is too slow!

    Hi All,
    The sales report is pretty slow: 90% of the runtime is OLAP time. I tried all the possible options in RSRT and also set up OLAP cache filling at the query level, but no result. Please help me.
    Thanks
    Vasu.

    Hi Vasu,
    Can you please refer to the link below:
    http://wiki.sdn.sap.com/wiki/display/BI/HowtoImproveQueryPerformance-A+Checklist
    It explains how you can improve your query performance.
    Also, as I said, since you are fetching data directly from master data the report will be a bit slow, so try to get the data directly from the cube; maybe you can use filters as well.
    Hope this helps.
    Regards
    Nilesh

  • Crystal Reports performance is too slow

    Dear SDNers,
    I have designed a Crystal Report that fetches data from a custom (Z) function module, with filters applied on the Crystal Reports side. While executing the report, it calls the function module multiple times, and because of this the performance is very bad. Why does the report call the function module multiple times? Do I need to make changes on the Crystal Reports side or on the function module side? Please clarify.
    Regards,
    Venkat

    A similar issue was seen a year ago. It concerned a function module call being executed multiple times from the CR4Ent tool; it involved sub-reports inside that report, and the issue was generic for any function module used for testing.
    At that time, the issue was resolved by upgrading to the latest available patch of CR4Ent and also by applying the latest patch on the SAP R/3 side.
    If you can post the exact support package and patch level of CR4Ent and of the SAP R/3 system, someone can tell you whether it is the latest or not.
    -Prathamesh

  • BI Server performance too slow

    Hi experts,
    I am facing an issue: my OBIEE installation sits on a Unix platform, and much of the time the server performance is too slow. When I check the processes, Java under the Orabi user is using more than 50% of the CPU. I need to find the root cause, as this is the production system and it is affecting many users.
    Thanks in advance.
    MT

    Thanks Prassu,
    Yes, the server is performing a lot of calculations, but they are required, as we need to cache reports at the daily level as well.
    I have observed that the MAX_CACHE_ENTRY_SIZE parameter [MAX_CACHE_ENTRY_SIZE = 1 MB;] is set to only 1 MB. I understand that any query fetching more than 1 MB of data will not be cached, but is there any chance that this setting affects the server as well?
    Thanks
    MT

  • Application response too slow: how to resolve (WebLogic)

    Hi all, can you please let me know the main areas I need to concentrate on in the WebLogic application server when clients complain that "application performance is too slow"?

    You need to check the following:
    1: Check the server log files for any errors at the time the client complains about server performance.
    2: Enable GC logging and check whether there are any issues with the server's memory usage. From the GC logs you can see how much time is being spent in GC; based on that analysis, you can try allocating more memory or changing the GC algorithm.
    For details refer to the following links:
    http://download.oracle.com/docs/cd/E13222_01/wls/docs100/perform/topten.html
    http://download.oracle.com/docs/cd/E13222_01/wls/docs100/perform/JVMTuning.html
    3: Finally, collect thread dump snapshots and check whether there are any stuck threads or a deadlock in the server at the time of the slow performance.

  • Why is performance so slow reading binary data from a SQL Azure DB with EF6.x

    I'm running a WPF client that hits a SQL Azure DB using EF 6.x. For the most part, everything seems to be working fine. The one exception is when I try to read a large binary column.
    I am storing files in the DB as a binary column. When I test against the local DB, everything sings. When I switch to the Azure DB, I get timeouts when I try to read the file contents. I have no problem saving the binary data to the DB, just reading it.
    I don't know how to troubleshoot this. I looked at the Query Performance page in the Azure portal, but it doesn't timestamp anything, and you can't clear it, so I can't correlate what's running with the queries that show up there.
    I tried to start SQL Profiler against the DB, but was denied because I'm not a member of the sysadmin fixed server role.
    If I query for the data directly, it comes back quickly. So this seems to be an issue with Azure via EF.
    Any help is appreciated.
    http://digitalcamel.blogspot.com/

    Hi Digital Camel,
    Since I don't know what your scenario is, I won't argue too much about not storing binaries in your SQL DB, but still: don't store binaries in your SQL DB :). The main reason is simple: first and foremost, in both the current and future pricing tiers, the levels are defined by the size of the DB. Basically, you pay far more by storing your binaries in the SQL layer rather than elsewhere, such as Azure Storage. Second, the protocol over which your binaries travel on the wire is prone to network connectivity issues; with Azure Storage you could use HTTP(S) or FTP instead. Last but not least, while you download a binary from your DB you keep a connection open, which in the end is a connection other users might have used to query data.
    However, in regard to your question: how did you "query for the data directly"? Did you try querying the data in SSMS with the Client Statistics option on? That could tell you whether the problem is network-, server- or client-related.
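    For example, something along these lines in SSMS with Client Statistics enabled (dbo.Documents, FileContent and DocumentId are placeholders, not your actual names):

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    -- How big is the blob, without transferring it?
    SELECT DATALENGTH(FileContent) AS SizeInBytes
    FROM dbo.Documents
    WHERE DocumentId = 42;
    -- Now pull the blob itself and compare elapsed times in Client Statistics
    SELECT FileContent
    FROM dbo.Documents
    WHERE DocumentId = 42;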
    Hope this helps!
    Alex

  • SQL Azure very slow

    Hi Experts
    We have a SQL Azure database that is running very slowly, so I took a backup of the database and restored it on my on-premises SQL Server Express edition.
    I tried an update statement on a logs table that has approximately 1,062,367 records:
    UPDATE logs set FileId = NULL
    where LEN(FileId) < 36
    The update statement above updates about 910,593 records on localhost in 20 seconds; if I run the same update statement on SQL Azure, it takes 40 minutes and 16 seconds. The statement was issued from SQL Server Management Studio installed on my local machine.
    As for the internet connection, we have a fiber connection with a download speed of 70.62 Mbps and an upload speed of 84.09 Mbps.
    I am not sure what is going wrong with the SQL Azure database, i.e. whether there could be a specific fault on my database at the Microsoft side, etc.
    Any advice or suggestion will be highly appreciated.
    Kind Regards
    Bhavesh
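
    (A common mitigation for a long-running single-statement update on Azure SQL Database is to batch the work so each transaction stays small; a minimal sketch using the table and columns from the post:)

    -- Re-run the update in 10,000-row chunks; NULLed rows drop out of the predicate
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (10000) logs
        SET FileId = NULL
        WHERE LEN(FileId) < 36;
        SET @rows = @@ROWCOUNT;
    END;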

    I waited 40 minutes for the update to complete; it finished successfully and updated the logs table, so I cannot re-run the update statement, as there would be no records left to update.
    I managed to find another example that took about 10 minutes to complete in SSMS, but when I ran the same query in the Azure management portal it took only 6 seconds, which puzzled me.
    Here is the query: 
    select l.Value3 as ClientReference,
           l.[Key] as ActivityUser,
           l.Value2 as FormDisplayName,
           l.Value6 as Status,
           l.DateStamp as LastUpdated,
           l.[Action] as FileDescription,
           l.CompanyId as ClientID,
           l.FileId as FileID,
           f.id
    from Logs l
    LEFT OUTER JOIN files f on f.Id = l.FileId
    where l.AccountId = 578
      and type = 1
    The above query returns only about 3,694 records.
    Here are the details of the query performance:
    Azure Management Portal
    Duration(ms): 6487
    CPU(ms): 2443
    Logical Reads: 74729
    Physical Reads: 66147
    Logical Writes: 0
    SSMS
    Duration(ms): 615259
    CPU(ms): 3666
    Logical Reads: 74729
    Physical Reads: 71370
    Logical Writes: 0
    Any update will be appreciated.
    Bhavesh

  • Azure Backup performance is very slow.

    I am trying to replace a vendor solution with Azure Backup, but the performance is so slow that I cannot get a backup to complete. I believe we have plenty of bandwidth for the backup, and we are not going through a proxy. For example, I had a backup run for 24 hours; it then lost the 1 GB it had backed up, with a message about obengine and port 6049.
    Any help appreciated.

    Hi,
    Thanks for posting here!
    It seems like you are pulling data over an internet connection rather than accessing the data locally.
    In addition, Azure is a service-based platform of shared resources, which means that two types of latencies or interruptions regularly occur.
    The first is the time taken to make a request and receive a response over the internet. Since those requests and responses can travel through any number of routers before they return to the client, timeouts and disconnections are more frequent than in local, fixed networks.
    The second is the time it takes a shared-resource system like Azure to create backup versions of data for durability and to replace and reroute requests to any removed instances. It is important to understand these latencies and failures so you can compensate for them and meet your application's performance requirements. Load testing at a real-world level will give you more information about the latencies you see.
    Regards,
    Sadiqh

  • PL/SQL block is too slow; would a procedure be a better option?

    Hi all,
    How do I tune a PL/SQL block that traverses cursors, fetches millions of records and then executes inserts into different tables using EXECUTE IMMEDIATE?
    It is too slow and takes 10 hours to populate 40 tables holding millions of records.
    I have to make some modifications to the data, so I cannot do it with CTAS, i.e. a single SQL statement.
    Should I make it a procedure? Would that help?
    Please help or suggest, as I am new to PL/SQL.
    My code looks like:
    declare
        cursor cur_table1 is
            select field1, field2, field3, field4 from table1;
    begin
        for i in cur_table1
        loop
            execute immediate 'insert into table2 (field1,field2,field3,field4) '||
                'select :1, field2, field3, field4 '||
                'from table1 where field3 = :2'
                using i.field1||'_'||to_char(sysdate,'ddmmyyyy hh12:mi:ss'), i.field1;
            commit;
        end loop;
    end;
    Thanks and Regards,
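
    (For comparison, a row-by-row loop like the above can usually be collapsed into one set-based statement, which removes the per-row context switching; a sketch against the same hypothetical tables:)

    -- One pass over table1 instead of one dynamic INSERT per fetched row
    INSERT INTO table2 (field1, field2, field3, field4)
    SELECT field1 || '_' || TO_CHAR(SYSDATE, 'ddmmyyyy hh12:mi:ss'),
           field2, field3, field4
    FROM table1;
    COMMIT;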

    declare
    cursor cur_projects is
         select PROJECTID, PROJECTNAME, DESCRIPTION, DELETED, DELETINGDATE, ACTIVE, ADMINONLY, READONLY, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, CREATOR, CREATED, MODIFIER, MODIFIED from projects ;
    cursor cur_projectversion(p_projectid projects.projectid%TYPE) is
         select PROJECTID, PROJECTVERSIONID, PROJECTVERSIONNAME, DESCRIPTION, DELETED , DELETINGDATE, ACTIVE , ADMINONLY, READONLY, decode(EFFECTIVEDATE,null,trunc(sysdate),EFFECTIVEDATE) EFFECTIVEDATE, EXPIRATIONDATE, SECURITYCLASS, PROJECTCONTACT, DEFAULTVERSION, DEFAULTSTARTPAGE, IMAGEPATH, MAXEXAMINEERRORS, LOCKTIMEOUT, MEMORYSAVINGLEVEL, PRELOADOBJECTS, PUBLICATIONSRCPROJNAME, PUBLICATIONSRCPROJVERNAME, CREATOR, CREATED, MODIFIER, MODIFIED, PROFILELOADERCLASS /*, TRACKCHANGES */
         from projectversions where PROJECTID=p_projectid ;
    cursor cur_objects(p_projectid projects.projectid%TYPE,p_projectversionid projectversions.projectversionid%TYPE) is
         select PROJECTID , PROJECTVERSIONID, OBJECTID , OBJECTKEY , PARENTID, KIND , NAME , TITLE , OWNER , CREATED, MODIFIER , MODIFIED , READY_TO_PUBLISH, LAST_PUBLISHED_DATE , LAST_PUBLISHER , EFFECTIVE_PUBLISHING_DATE , PUBLISHER , PUBLISHING_DATE /*, to_lob(scripttext) */ from OBJECTS where PROJECTID=p_projectid and PROJECTVERSIONID=p_projectversionid /*order by objectid */;
    -- counters and existence flags used in the loops below
    cnt number := 0;
    cnt_parentid_1 number := 0;
    object_exists number;
    object_parentid_exists number;
    begin
    for i in cur_projects
    loop
    dbms_output.put_line('PROJECTID => '||i.projectid);
    dbms_output.put_line('_________________________________');
    execute immediate 'insert into &TARGET_USER\.projects(locktimeout, memorysavinglevel , preloadobjects, projectid, projectname, description, deleted, deletingdate, active, adminonly, readonly, securityclass, projectcontact, defaultversion, defaultstartpage, imagepath, maxexamineerrors ) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17) '
    using i.locktimeout, i.memorysavinglevel, i.preloadobjects,i.projectid ,i.projectname , i.description , i.deleted , i.deletingdate , i.active , i.adminonly , i.readonly, i.securityclass, i.projectcontact , i.defaultversion, i.defaultstartpage , i.imagepath, i.maxexamineerrors;
    for k in cur_projectversion(i.projectid)
         loop
    for l in cur_objects(k.projectid,k.projectversionid)
              loop
                   cnt:=cnt+1;
    select count(1) into object_exists from &TARGET_USER\.objects where objectid=l.objectid and projectversionid=1 and projectid=l.projectid;
              if object_exists = 0
              then
              if l.objectid = 1 ------Book Object , objectid = 1 and parentid = 0
              then
              execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY,PARENTID,NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                        using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY, 0 , l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                   else
                        select count(1) into object_parentid_exists from objects where objectid=l.parentid and projectversionid=1 and projectid=l.projectid;
                        if object_parentid_exists = 0 ---Set Parentid as 1
                        then
                                  cnt_parentid_1:=cnt_parentid_1+1;
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY,PARENTID,NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY, 1 , l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                        else
                                  execute immediate 'INSERT INTO &TARGET_USER\.objects(PROJECTID,PROJECTVERSIONID,OBJECTID, OBJECTKEY, PARENTID, NAME, KIND,LAST_PUBLISHED_DATE,LAST_PUBLISHER,REVISIONID,DISPLAYORDER,READONLY,DELETED) values( :1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13)'
                                  using l.PROJECTID, 1, l.OBJECTID,l.OBJECTKEY,l.PARENTID,l.NAME,l.KIND, '' , '' , '', 0, 'N', 'N';
                        end if;
                   end if ;
         end if;
                   execute immediate 'INSERT INTO &TARGET_USER\.objectversions( PROJECTID, OBJECTID, PROJECTVERSIONID ,VERSIONNAME,OBJECTVERSIONID, REVISIONID,DESCRIPTION, TITLE , OWNER, CREATED, MODIFIER, MODIFIED, READY_TO_PUBLISH , LAST_PUBLISHED_DATE, LAST_PUBLISHER, EFFECTIVEDATE, SCRIPTTEXT, REVIEWSTATUS, READONLY, PUBLISHED, DELETED ) '||
                             'SELECT PROJECTID, OBJECTID, 1, owner||:1, PROJECTVERSIONID , '''', '''', TITLE, OWNER, CREATED, MODIFIER, MODIFIED, ''N'', '''' , '''', :2 , to_lob(SCRIPTTEXT), '''', ''N'', ''N'', '''' '||
                             'FROM OBJECTS '||
                             'WHERE PROJECTID= :3 and PROJECTVERSIONID= :4 and OBJECTID= :5'
                             using '_'||TO_CHAR(k.EFFECTIVEDATE,'DDMMYYHHMISS'),k.EFFECTIVEDATE,l.projectid,l.projectversionid,l.objectid;
         end loop;
         dbms_output.put_line(cnt||' OBJECTS, OBJECTVERIONS POPULATED');
         dbms_output.put_line(cnt_parentid_1||' DUMPED UNDER BOOK FOLDER ');
         cnt_parentid_1:=0;
         cnt:=0;
    ............

  • SQL Developer 2.1 working too slow

    Hi All,
    I am working with SQL Developer 2.1 after 3 years; previously I used version 1.2. A lot has changed in 2.1, but it is too slow, and while debugging, variable values do not show in the tooltip, so we have to rely on the Smart Data tab. If any settings or patches are available for the following problems, please let me know:
    1. Too slow
    2. Variable values in the tooltip
    By
    Srinivas M. P.

    SQL Developer is using 127,232 KB; free: 1.32 GB.
    Is there a setting in SQL Developer, or are you talking about my system memory?
    If you are talking about my system memory, I am using 2 GB of RAM and the system works well.
    If I do anything in TOAD, TOAD works fine, but SQL Developer is too slow.
