Insert Performance is poor

Hi,
In my database there is a table about 500 MB in size with 5 indexes on it (2 of them composite).
Through SQL*Loader, 15 to 20 batch files are running, and those jobs are inserting into this table, so the insert rate on this table is high. PCTFREE for this table is 10% and PCTUSED is the default.
But inserts into this table are slow; even 10,000 rows take a long time to insert.
Please help.
Anand

You can improve the performance of SQL*Loader on conventional loads in a number of ways:
*Increase the READSIZE; I use 20971520, which may be the maximum
*Increase the number of rows per commit (ROWS) to 1000 or even 10000 (the default is 64)
*Increase the BINDSIZE used to hold the values read from the data file; again, I use 20971520
SQL*Loader will use array inserts, so that one INSERT statement will be sent to the database server with many data records in a single round trip, rather than one round trip per data record. This is a big performance boost. Increasing the parameters I have listed will increase the array size, increasing efficiency and reducing the number of separate array inserts issued by SQL*Loader.
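For illustration, these settings can be passed on the sqlldr command line or in the control file's OPTIONS clause. A minimal sketch, in which the data file, table, and column names are placeholders rather than anything from your system:
OPTIONS (READSIZE=20971520, BINDSIZE=20971520, ROWS=10000)
LOAD DATA
INFILE 'batch01.dat'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ','
(col1, col2, col3, col4, col5)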
Another option to test is to drop the 5 indexes on the table, load the new data, then recreate the 5 indexes. Without the indexes, the load of the new records will happen much faster, and index maintenance can also be a source of contention between multiple concurrent SQL*Loader sessions, further slowing the inserts. Given the table is only 500 MB, it might not take that long to recreate the indexes.
Of course, you cannot remove the indexes if any triggers fired by the loaded data rely on them for fast execution. And no other part of the application should be running against the table during the load either.
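A minimal sketch of the drop-and-recreate approach (the index name, columns, and parallel degree are placeholders):
DROP INDEX big_table_ix1;
-- ... run the SQL*Loader batch jobs ...
CREATE INDEX big_table_ix1 ON big_table (col1, col2) NOLOGGING PARALLEL 4;
ALTER INDEX big_table_ix1 NOPARALLEL;  -- reset the degree once the rebuild is done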
John

Similar Messages

  • How do I block an ad that opens up a new Firefox window and keeps telling me my system performance is poor? I can't even capture the link.

    This particular ad, supposedly from a Windows Certified Partner, opens in a second Firefox window and keeps flashing the message that my System Performance is Poor, wanting me to click on their ad and let them do whatever nasty things they want to my system. I have not been able to capture the path but next time it comes up, I will write down all that I can see.
    If I have closed my main Firefox screen first, and then close out this ad, the next time I open Firefox, the ad opens and all my standard tabs are gone, requiring me to reopen each one separately (Facebook, Google, etc.).

    Just to address the last point: check your History menu to see whether you have the Restore Previous Session option and use that if you can. If that is grayed out or doesn't restore everything, check the Recently Closed Windows and Recently Closed Tabs lists for other pages.
    The unwanted window may be generated by an add-on. Try disabling ALL nonessential or unrecognized extensions on the Add-ons page. Either:
    * Ctrl+Shift+a
    * "3-bar" menu button (or Tools menu) > Add-ons
    In the left column, click Extensions. Then, if in doubt, disable.
    Usually a link will appear above at least one disabled extension to restart Firefox. You can complete your work on the tab and click one of the links as the last step.
    Any improvement?
    Here are some other things to check:
    (1) user.js file that changes Firefox startup behavior and overrides your preferences. This article describes how to track that down and delete it if you have one: [[How to fix preferences that won't save]].
    (2) Possible hijacked shortcut. Check the "target" of the desktop icon you use to start Firefox to see whether it lists the unwanted page. To do that:
    right-click the icon > Properties > Shortcut tab
    For 64-bit Windows 7, the Target should be no more and no less than this:
    "C:\Program Files (x86)\Mozilla Firefox\firefox.exe"
    (3) Possible undisclosed bundle items. If you have installed any free software recently, check your Windows Control Panel, Uninstall a program for surprises. If you click the "Installed on" column head to group by date, it is easier to spot bundled junk. Remove everything suspicious or unrecognized.
    (4) Supplemental clean up scans. Our support article lists tools other Firefox users have found helpful: [[Troubleshoot Firefox issues caused by malware]].
    Hopefully that cures it.

  • Bad INSERT performance when using GUIDs for indexes

    Hi,
    we use an Oracle 9.2.0.6 db on Win XP Pro. The application (.NET v1.1) uses ODP.NET. All PKs of the tables are GUIDs, represented in Oracle as RAW(16) columns.
    When testing with mass data we see, more and more, a problem with bad INSERT performance on some tables that contain many rows (~10M). Those tables have a RAW(16) PK and an additional non-unique index, also on a RAW(16) column (both are standard B*tree). A PerfStat report tells us there is much activity on the index tablespace.
    When I analyze the related table and its indexes, I see a very, very high clustering factor.
    Is there a way to improve the insert performance in that case? Use another type of index? Generally avoid indexed RAW columns?
    Please help.
    Daniel

    Hi
    After my last tests I conclude the following:
    The query returns 1-30 records
    Test 1: Using Form Builder
    -     Execution time 7-8 seconds
    Test 2: Using Jdeveloper/Toplink/EJB 3.0/ADF and Oracle AS 10.1.3.0
    -     Execution time 25-27 seconds
    Test 3: Using JDBC/ADF and Oracle AS 10.1.3.0
    - Execution time 17-18 seconds
    When I use:
    session.setLogLevel(SessionLog.FINE) and
    session.setProfiler(new PerformanceProfiler())
    I don’t see any improvement in the execution time of the query.
    Thank you
    Thanos

  • Oltp insert performance

    Hi Experts ,
    1. Could someone guide me on what impacts insert performance in an OLTP application with ~25 concurrent sessions doing 20 inserts/session into table X? (env: Oracle 11g, 3-node RAC, ASSM tablespace; table X is range partitioned)
    2. If any storage parameter is not properly set, how do I identify which one needs to be fixed?
    Note: current insert performance is 0.02 sec/insert.

    Hi Garry,
    Thanks for your response.
    Some more info regarding the app: DB version 11.2.0.3. Below is the AWR info during peak load for a 1-hour snapshot. Any suggestions are helpful.
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:    18,624M    18,624M  Std Block Size:         8K
               Shared Pool Size:     3,200M     3,200M      Log Buffer:    25,888K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                4.9                0.0       0.01       0.00
           DB CPU(s):                0.5                0.0       0.00       0.00
           Redo size:          585,778.7            2,339.6
       Logical reads:           24,046.6               96.0
       Block changes:            2,374.5                9.5
      Physical reads:            1,101.6                4.4
     Physical writes:              394.6                1.6
          User calls:            2,086.6                8.3
              Parses:                9.5                0.0
         Hard parses:                0.5                0.0
    W/A MB processed:                5.8                0.0
              Logons:                0.6                0.0
            Executes:              877.7                3.5
           Rollbacks:              218.6                0.9
        Transactions:              250.4
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.99       Redo NoWait %:   99.99
                Buffer  Hit   %:   95.44    In-memory Sort %:  100.00
                Library Hit   %:   99.81        Soft Parse %:   95.16
             Execute to Parse %:   98.92         Latch Hit %:   99.89
    Parse CPU to Parse Elapsd %:   92.50     % Non-Parse CPU:   97.31
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.36   74.73
        % SQL with executions>1:   90.63   90.41
      % Memory for SQL w/exec>1:   83.10   85.49
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Event                          Waits  Time(s)  Avg(ms)  %DBtime  Wait Class
    db file sequential read    3,686,200   15,658        4     87.7  User I/O
    DB CPU                                  1,802              10.1
    db file parallel read         19,646      189       10      1.1  User I/O
    gc current grant 2-way       842,079      145        0       .8  Cluster
    gc current block 2-way       425,663      106        0       .6  Cluster

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third-party application reporting slow insertion performance. I traced the session and found that most of the elapsed time for one insert execution is spent on "SQL*Net more data from client". Bulk binding appears to be in use, because one execution inserts 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from a 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution: no problem at all. Network folks confirm that the network should not be an issue either; ping time from the app server to the db server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call     count    cpu   elapsed   disk   query  current   rows
    Parse        1   0.00      0.00      0       0        0      0
    Execute      1   0.02     14.29      1      94     2565    200
    Fetch        0   0.00      0.00      0       0        0      0
    total        2   0.02     14.29      1      94     2565    200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                        Times Waited  Max. Wait  Total Waited
    SQL*Net more data from client                    28       6.38         14.19
    db file sequential read                           1       0.02          0.02
    SQL*Net message to client                         1       0.00          0.00
    SQL*Net message from client                       1       0.00          0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no luck; I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were rows in a table, which gave me very fast insertion into the database. But I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business setup followed by a lot of companies, and there must be a solution for this.
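    As a rough sketch of that external-table workaround (the directory path, file, table, and column names are placeholders, assuming a comma-delimited flat file):
    CREATE DIRECTORY load_dir AS '/data/loads';
    CREATE TABLE orders_ext (
      order_id  NUMBER,
      cust_name VARCHAR2(50)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('orders.dat')
    );
    -- direct-path insert from the flat file into the real table
    INSERT /*+ APPEND */ INTO orders SELECT * FROM orders_ext;
    COMMIT;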

  • XMLTYPE insert performance

    I am experiencing performance problems when inserting a 30 MB XML file into an XMLTYPE column. Under Oracle 11, with the schema I am using, the minimum time I can achieve is around 9 minutes, which is too long. Can anyone comment on whether this performance is normal, and possibly suggest how it could be improved while retaining the benefits of structured storage? Thanks in advance for the help :)

    sorry for the late reply - I didn't notice that you had replied to my earlier post...
    To answer your questions in order:
    - I am using "structured" storage because I read (in this article: [http://www.oracle.com/technology/pub/articles/jain-xmldb.html]) that this would result in higher XQuery performance.
    - the schema isn't very large, but it is complex (as discussed in the above article).
    I built my table by first registering the schema and then adding the XML elements to the table such that they are stored in structured storage, i.e.
    --// Register schema /////////////////////////////////////////////////////////////
    begin
    dbms_xmlschema.registerSchema(
    schemaurl=>'fof_fob.xsd',
    schemadoc=>bfilename('XFOF_DIR','fof_fob.xsd'),
    local=>TRUE,
    gentypes=>TRUE,
    genbean=>FALSE,
    force=>FALSE,
    owner=>'FOF',
    csid=>nls_charset_id('AL32UTF8'));
    end;
    /
    COMMIT;
    and then created the table using ...
    --// Create the XCOMP table /////////////////////////////////////////////////////////////
    create table "XCOMP" (
         "type" varchar(128) not null,
         "id" int not null,
         "idstr1" varchar(50),
         "idstr2" varchar(50),
         "name" varchar(255),
         "rev" varchar(20) not null,
         "tstamp" varchar(30) not null,
         "xmlfob" xmltype)
    XMLTYPE "xmlfob" STORE AS OBJECT RELATIONAL
    XMLSCHEMA "fof_fob.xsd"
    ELEMENT "FOB";
    No indexing was specified for this table. Then I inserted the offending 30 MB XML file using the following (in C#, using ODP.NET under .NET 3.5):
    void test(string myName, XElement myXmlElem)
    {
        OracleConnection connection = new OracleConnection();
        connection.Open();
        string statement = "INSERT INTO XCOMP (\"name\", \"xmlfob\") VALUES (:1, :2)";
        XDocument xDoc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"), myXmlElem);
        OracleCommand insCmd = new OracleCommand(statement, connection);
        OracleXmlType xmlinfo = new OracleXmlType(connection, xDoc.CreateReader());
        // parameter names come from the FofDbCmdInsert constants in my code
        insCmd.Parameters.Add(FofDbCmdInsert.Name, OracleDbType.Varchar2, 255);
        insCmd.Parameters.Add(FofDbCmdInsert.Xmldoc, OracleDbType.XmlType);
        insCmd.Parameters[0].Value = myName;
        insCmd.Parameters[1].Value = xmlinfo;
        insCmd.ExecuteNonQuery();
        connection.Close();
    }
    It took around 9 minutes to execute the ExecuteNonQuery statement, using Oracle 11 Standard Edition running under Windows 2008 64-bit with 8 GB RAM and a 2.5 GHz single core (of a quad-core running under VMware).
    I would much appreciate any suggestions that could speed up the insert performance here. As a temporary solution I chopped some of the information out of the XML document and store it separately in another table, but this approach makes using XQueries a bit inflexible, although the performance is now in seconds rather than minutes.
    I can't see any reason why Oracle's shredding mechanism should be less efficient than shredding the information manually.
    Thanks in advance for any helpful hints you can provide!

  • After updating to iOS 8, iPhone 4S performance has become poor

    Hi There,
    After updating to iOS 8 on my iPhone 4S, the phone's performance has become poor. When will this problem be resolved?

    Hi..
    Have you tried a reset yet?
    Press and hold the Sleep/Wake button and the Home button together for at least ten seconds, until the Apple logo appears.
    If that doesn't help, tap Settings > General > Reset > Reset All Settings
    No data is lost due to a reset.

  • Poor insert performance

    I am currently facing an issue where an insert into a table is taking ~2 hrs to insert 1M rows. It appears to be due to a foreign key constraint: the insert is 2x faster when I disable that constraint. Is it normal that a referential constraint can slow things down by 2x? What could be wrong with it?

    Yes.
    Here is the content from the trace output:
    Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      enq: FB - contention                         3951        0.00          1.63
      Disk file operations I/O                      143        0.00          0.00
      gc current grant 2-way                      39110        0.01         10.94
      db file sequential read                    766473        0.56       3834.87
      gc current grant busy                        1917        0.00          0.81
      gc current block 2-way                        782        0.00          0.34
      KJC: Wait for msg sends to complete           185        0.00          0.00
      gc current multi block request               1835        0.00          1.22
      gc current grant congested                    233        0.00          0.06
      enq: SK - contention                            1        0.00          0.00
      gc cr multi block request                     290        0.00          0.26
      row cache lock                                 55        0.00          0.01
      db file scattered read                        259        0.02          1.97
      gc current block congested                      4        0.00          0.00
      latch: gc element                               1        0.00          0.00
      log file switch completion                      2        0.03          0.05
      gc buffer busy release                          1        0.00          0.00
      enq: TT - contention                            1        0.00          0.00
      log file sync                                   1        0.00          0.00
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1       85.93         85.93
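    The trace is dominated by "db file sequential read", consistent with a per-row lookup of the parent key for the foreign key check. One pattern worth testing, sketched here with placeholder names (not from this thread): disable the constraint for the bulk load and re-enable it afterwards, paying the validation cost once instead of per row.
    ALTER TABLE child_tab DISABLE CONSTRAINT child_parent_fk;
    -- ... run the 1M-row insert ...
    ALTER TABLE child_tab ENABLE VALIDATE CONSTRAINT child_parent_fk;
    -- ENABLE NOVALIDATE would skip rechecking the loaded rows,
    -- at the cost of leaving the constraint unvalidated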

  • MacBook Pro performing very poorly due to Yosemite

    Hi,
    I bought a MacBook Pro (2012 model) just 10 days back. Unfortunately I got Yosemite preinstalled. My MacBook Pro is performing very, very poorly now. Please help me in this regard.
    Problem description:
    Very slow performance
    EtreCheck version: 2.1.5 (108)
    Report generated 12 January 2015 8:06:38 pm IST
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
      MacBook Pro (13-inch, Mid 2012) (Verified)
      MacBook Pro - model: MacBookPro9,2
      1 2.5 GHz Intel Core i5 CPU: 2-core
      4 GB RAM Upgradeable
      BANK 0/DIMM0
      2 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      2 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000
      Color LCD 1280 x 800
    System Software: ℹ️
      OS X 10.10.1 (14B25) - Uptime: 3 days 1:6:20
    Disk Information: ℹ️
      APPLE HDD HTS545050A7E362 disk0 : (500.11 GB)
      EFI (disk0s1) <not mounted> : 210 MB
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
      Macintosh HD (disk1) / : 498.89 GB (428.37 GB free)
      Core Storage: disk0s2 499.25 GB Online
      HL-DT-ST DVDRW  GS41N 
    USB Information: ℹ️
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Inc. Apple Internal Keyboard / Trackpad
      Apple Computer, Inc. IR Receiver
      Apple Inc. FaceTime HD Camera (Built-in)
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Configuration files: ℹ️
      /etc/hosts - Count: 6
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Kernel Extensions: ℹ️
      /Library/Extensions
      [loaded] com.driver.LogJoystick (2.0 - SDK 10.9) [Support]
      [loaded] com.logitech.driver.LogiGamingMouseFilter (1 - SDK 10.9) [Support]
    Startup Items: ℹ️
      MySQLCOM: Path: /Library/StartupItems/MySQLCOM
      Startup items are obsolete in OS X Yosemite
    Problem System Launch Agents: ℹ️
      [killed] com.apple.bird.plist
      [killed] com.apple.CallHistoryPluginHelper.plist
      [killed] com.apple.CallHistorySyncHelper.plist
      [killed] com.apple.cloudd.plist
      [killed] com.apple.coreservices.appleid.authentication.plist
      [killed] com.apple.EscrowSecurityAlert.plist
      [killed] com.apple.icloud.fmfd.plist
      [killed] com.apple.nsurlsessiond.plist
      [killed] com.apple.pluginkit.pkd.plist
      [killed] com.apple.rcd.plist
      [killed] com.apple.recentsd.plist
      [killed] com.apple.sbd.plist
      [killed] com.apple.scopedbookmarkagent.xpc.plist
      [killed] com.apple.security.cloudkeychainproxy.plist
      [killed] com.apple.spindump_agent.plist
      15 processes killed due to memory pressure
    Problem System Launch Daemons: ℹ️
      [killed] com.apple.AssetCacheLocatorService.plist
      [killed] com.apple.awdd.plist
      [killed] com.apple.corestorage.corestoragehelperd.plist
      [killed] com.apple.ctkd.plist
      [killed] com.apple.diagnosticd.plist
      [killed] com.apple.emond.aslmanager.plist
      [killed] com.apple.icloud.findmydeviced.plist
      [killed] com.apple.ifdreader.plist
      [killed] com.apple.nehelper.plist
      [killed] com.apple.nsurlsessiond.plist
      [killed] com.apple.periodic-daily.plist
      [killed] com.apple.periodic-weekly.plist
      [killed] com.apple.softwareupdate_download_service.plist
      [killed] com.apple.spindump.plist
      [killed] com.apple.systemstats.analysis.plist
      [killed] com.apple.wdhelper.plist
      16 processes killed due to memory pressure
    Launch Agents: ℹ️
      [not loaded] com.adobe.AAM.Updater-1.0.plist [Support]
      [not loaded] com.teamviewer.teamviewer.plist [Support]
      [not loaded] com.teamviewer.teamviewer_desktop.plist [Support]
    Launch Daemons: ℹ️
      [loaded] com.adobe.fpsaud.plist [Support]
      [loaded] com.teamviewer.Helper.plist [Support]
      [not loaded] com.teamviewer.teamviewer_service.plist [Support]
    User Launch Agents: ℹ️
      [loaded] com.adobe.AAM.Updater-1.0.plist [Support]
      [loaded] com.google.keystone.agent.plist [Support]
      [loaded] com.valvesoftware.steamclean.plist [Support]
    User Login Items: ℹ️
      Android File Transfer Agent Application (/Users/[redacted]/Library/Application Support/Google/Android File Transfer/Android File Transfer Agent.app)
    Internet Plug-ins: ℹ️
      FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
      Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
      QuickTime Plugin: Version: 7.7.3
      AdobeAAMDetect: Version: AdobeAAMDetect 1.0.0.0 - SDK 10.6 [Support]
      Default Browser: Version: 600 - SDK 10.10
    Safari Extensions: ℹ️
      Searchme [Cached] Adware! [Remove]
    3rd Party Preference Panes: ℹ️
      Flash Player  [Support]
      MySQL  [Support]
    Time Machine: ℹ️
      Time Machine not configured!
    Top Processes by CPU: ℹ️
          5% WindowServer
          0% fontd
          0% smcFanControl
          0% AppleSpell
          0% Google Chrome
    Top Processes by Memory: ℹ️
      408 MB com.apple.SpeechRecognitionCore.speechrecognitiond
      133 MB Google Chrome
      120 MB Skype
      86 MB loginwindow
      77 MB Pages
    Virtual Memory Information: ℹ️
      51 MB Free RAM
      1.30 GB Active RAM
      1.22 GB Inactive RAM
      773 MB Wired RAM
      45.09 GB Page-ins
      646 MB Page-outs
    Diagnostics Information: ℹ️
      Jan 12, 2015, 01:11:20 PM /Users/[redacted]/Library/Logs/DiagnosticReports/httpd_2015-01-12-131120_[redacted].crash
      Jan 12, 2015, 01:06:00 PM /Users/[redacted]/Library/Logs/DiagnosticReports/httpd_2015-01-12-130600_[redacted].crash
      Jan 11, 2015, 11:11:16 PM /Users/[redacted]/Library/Logs/DiagnosticReports/httpd_2015-01-11-231116_[redacted].crash
      Jan 11, 2015, 11:09:03 PM /Users/[redacted]/Library/Logs/DiagnosticReports/httpd_2015-01-11-230903_[redacted].crash

    Before buying a second-hand computer, you should have run Apple Diagnostics or the Apple Hardware Test, whichever is applicable.
    The first thing to do after buying the computer is to erase the internal drive and install a clean copy of OS X. You—not the original owner—must do that. Changes made by Apple over the years have made this seemingly straightforward task very complex.
    How you go about it depends on the model, and on whether you already own another Mac. If you're not sure of the model, enter the serial number on this page. Then find the model on this page to see what OS version was originally installed.
    It's unsafe, and may be unlawful, to use a computer with software installed by a previous owner.
    1. If you don't own another Mac
    a. If the machine shipped with OS X 10.4 or 10.5, you need a boxed and shrink-wrapped retail Snow Leopard (OS X 10.6) installation disc from the Apple Store or a reputable reseller—not from eBay or anything of the kind. If the machine is very old and has less than 1 GB of memory, you'll need to add more in order to install 10.6. Preferably, install as much memory as it can take, according to the technical specifications.
    b. If the machine shipped with OS X 10.6, you need the installation media that came with it: gray installation discs, or a USB flash drive for a MacBook Air. You should have received the media from the original owner, but if you didn't, order replacements from Apple. A retail disc, or the gray discs from another model, will not work.
    To start up from an optical disc or a flash drive, insert it, then restart the computer and hold down the C key at the startup chime. Release the key when you see the gray Apple logo on the screen.
    c. If the machine shipped with OS X 10.7 or later, you don't need media. It should start up in Internet Recovery mode when you hold down the key combination option-command-R at the startup chime. Release the keys when you see a spinning globe.
    d. Some 2010-2011 models shipped with OS X 10.6 and received a firmware update after 10.7 was released, enabling them to use Internet Recovery. If you have one of those models, you can't reinstall 10.6 even from the original media, and Internet Recovery will not work either without the original owner's Apple ID. In that case, contact Apple Support, or take the machine to an Apple Store or another authorized service provider to have the OS installed.
    2. If you do own another Mac
    If you already own another Mac that was upgraded in the App Store to the version of OS X that you want to install, and if the new Mac is compatible with it, then you can install it. Use Recovery Disk Assistant to prepare a USB device, then start up the new Mac from it by holding down the C key at the startup chime. Alternatively, if you have a Time Machine backup of OS X 10.7.3 or later on an external hard drive (not a Time Capsule or other network device), you can start from that by holding down the option key and selecting it from the row of icons that appears. Note that if your other Mac was never upgraded in the App Store, you can't use this method.
    3. Partition and install OS X
    a. If you see a lock screen when trying to start up from installation media or in Recovery mode, then a firmware password was set by the previous owner, or the machine was remotely locked via iCloud. You'll either have to contact the owner or take the machine to an Apple Store or another service provider to be unlocked. You may be asked for proof of ownership.
    b. Launch Disk Utility and select the icon of the internal drive—not any of the volume icons nested beneath it. In the  Partition tab, select the default options: a GUID partition table with one data volume in Mac OS Extended (Journaled) format. This operation will permanently remove all existing data on the drive.
    c. An unusual problem may arise if all the following conditions apply:
              OS X 10.7 or later was installed by the previous owner
              The startup volume was encrypted with FileVault
              You're booted in Recovery mode (that is, not from a 10.6 installation disc)
    In that case, you won't be able to unlock the volume or partition the drive without the FileVault password. Ask for guidance or see this discussion.
    d. After partitioning, quit Disk Utility and run the OS X Installer. If you're installing a version of OS X acquired from the App Store, you will need the Apple ID and password that you used. When the installation is done, the system will automatically restart into the Setup Assistant, which will prompt you to transfer the data from another Mac, its backups, or from a Windows computer. If you have any data to transfer, this is usually the best time to do it.
    e. Run Software Update and install all available system updates from Apple. To upgrade to a major version of OS X newer than 10.6, get it from the Mac App Store. Note that you can't keep an upgraded version that was installed by the original owner. He or she can't legally transfer it to you, and without the Apple ID you won't be able to update it in Software Update or reinstall, if that becomes necessary. The same goes for any App Store products that the previous owner installed—you have to repurchase them.
    4. Other issues
    a. If the original owner "accepted" the bundled iLife applications (iPhoto, iMovie, and Garage Band) in the App Store so that he or she could update them, then they're irrevocably linked to that Apple ID and you won't be able to download them without buying them. Reportedly, Mac App Store Customer Service has sometimes issued redemption codes for these apps to second owners who asked.
    b. If the previous owner didn't deauthorize the computer in the iTunes Store under his Apple ID, you won't be able to authorize it immediately under your ID. In that case, you'll either have to wait up to 90 days or contact iTunes Support.
    c. When trying to create a new iCloud account, you might get a failure message: "Account limit reached." Apple imposes a lifetime limit of three iCloud account setups per device. Erasing the device does not reset the limit. You can still use an iCloud account that was created on another device, but you won't be able to create a new one. Contact iCloud Support for more information. The setup limit doesn't apply to Apple ID accounts used for other services, such as the iTunes and Mac App Stores, or iMessage. You can create as many of those accounts as you like.

  • Suggestions to improve the INSERT performance

    Hi All,
    I have a table which has 170 columns.
    I am inserting huge data, 50K and more records, into this table.
    My insert looks like this:
    INSERT /*+ APPEND */ INTO REPORT_DATA (COL1,COL2,COL3,COL4,COL5,COL6)
    SELECT  DATA1,DATA2,DATA3,DATA4,DATA5,DATA5 FROM TXN_DETAILS
    WHERE COL1='CA';
    Here I want to insert values for only a few columns, hence I specify only those column names in the insert statement.
    But when huge data (50k+ rows) is returned by the select query, this statement takes a very long time to execute (approximately 10 to 15 mins).
    Please suggest how to improve this insert statement's performance. I am also using the 'append' hint.
    Thanks in advance.

    a - Disable/drop indexes and constraints - It's far faster to rebuild indexes after the data load, all at once. Indexes will also rebuild cleaner, and with less I/O, if they reside in a tablespace with a large block size.
    b - Manage segment header contention for parallel inserts - Make sure to define multiple freelists (or freelist groups) to remove contention for the table header. Multiple freelists add additional segment header blocks, removing the bottleneck. You can also use Automatic Segment Space Management (bitmap freelists, http://www.dba-oracle.com/art_dbazine_ts_mgt.htm) to support parallel DML, but ASSM has some limitations.
    c - Parallelize the load - You can invoke parallel DML (i.e. using the PARALLEL and APPEND hint) to have multiple inserts into the same table. For this INSERT optimization, make sure to define multiple freelists and use the SQL "APPEND" option. If you submit parallel jobs to insert against the table at the same time, using the APPEND hint may cause serialization, removing the benefit of parallel jobstreams.
    d - APPEND into tables - By using the APPEND hint, you ensure that Oracle always grabs "fresh" data blocks by raising the high-water mark for the table. If you are doing parallel insert DML, Append mode is the default and you don't need to specify an APPEND hint. Also, if you're going with APPEND, consider putting the table into NOLOGGING mode, which will allow Oracle to avoid almost all redo logging.
    insert /*+ append */ into customer values ('hello', 'there');
    e - Use a large blocksize - By defining large (i.e. 32k) blocksizes for the target table, you reduce I/O because more rows fit onto a block before a "block full" condition (as set by PCTFREE) unlinks the block from the freelist.
    f - Use NOLOGGING
    g - RAM disk - You can use high-speed solid state disk (RAM-SAN) to make Oracle inserts run up to 300x faster than platter disk.
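    Pulling points c, d, and f together, a hedged sketch against the tables from the question above (the parallel degree and column list are illustrative choices, not recommendations):
    ALTER SESSION ENABLE PARALLEL DML;
    ALTER TABLE report_data NOLOGGING;
    INSERT /*+ APPEND PARALLEL(report_data, 4) */ INTO report_data
      (col1, col2, col3, col4, col5, col6)
    SELECT /*+ PARALLEL(txn_details, 4) */ data1, data2, data3, data4, data5, data6
    FROM txn_details
    WHERE col1 = 'CA';
    COMMIT;  -- direct-path loaded rows are not visible until commit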

  • Truncate Table before Insert--Performance

    Hi All,
    This post concerns a special requirement where a table is truncated before records are inserted into it.
    Now, when a table is truncated, the High Water Mark (HWM) is reset to the lowest storage allocated for the table in the tablespace. After this, would an insert with APPEND boost the performance of the insert query?
    In a simple insert query, the Oracle engine consults the freelist to look for free space.
    But in an insert with APPEND, the engine starts above the HWM. And the question is: when truncate has been executed on the table, would the freelist be used in a simple insert?
    I just need to know if there are any benefits of using an APPEND insert on a truncated table, or whether a simple insert would perform the same as an insert with APPEND.
    Regards
    Nits

    Hi,
    if you don't need the data, truncate the table. There is no negative impact whether you are using a conventional-path or a direct-path insert.
    If you use APPEND, less redo is written for the table if the table is in NOLOGGING mode, but redo is still written for all indexes. I would recommend creating a full backup after that (if needed), because your table will not be recoverable otherwise (no redo information).
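    A minimal sketch of the pattern under discussion (table names are placeholders):
    TRUNCATE TABLE staging_tab;              -- resets the HWM
    INSERT /*+ APPEND */ INTO staging_tab    -- direct path: loads above the HWM
    SELECT * FROM source_tab;
    COMMIT;  -- required before this session can query staging_tab again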
    Dim

  • Can insert performance be improved playing with env parameters?

    Below are the environment configuration and the results of my bulk-load insert experiments. The results are from two scenarios, described below; the values for the two scenarios are separated by a space.
    Environment configuration:
    setTxn     N
    DeferredWrite Y     
    Sec Bulk Load     Y
    Post Build SecIndex Y
    Sync Y
    Column 1 values are for the scenario:
    Two databases
    a. Database with 2,500,000 records
    b. Database with 2,500,000 records
    Column 2 values are for the scenario:
    Two databases
    a. Database with 25,000,000 records
    b. Database with 25,000,000 records
    1. Is there good documentation that describes what the environment statistics mean?
    2. Looking at the statistics below, can you make any suggestions for performance improvement?
    The statistics:
    Eviction Stats                    
    nEvictPasses               3929          146066
    nNodesSelected               309219          17351997
    nNodesScanned          3150809     176816544
    nNodesExplicitlyEvicted     152897     8723271
    nBINsStripped          156322     8628726
    requiredEvictBytes     524323     530566
    CheckPoint Stats     
    nCheckpoints     55     1448
    lastCheckpointID     55     1448
    nFullINFlush     54     1024
    nFullBINFlush     26     494
    nDeltaINFlush     116     2661
    lastCheckpointStart     0x6f/0x2334f8     0xb6a/0x82fd83
    lastCheckpointEnd     0x6f/0x33c2d6     0xb6a/0x8c4a6b
    endOfLog     0xb/0x6f22e     0x6f/0x75a843     0xb6a/0x23d8f
    Cache Stats     
    nNotResident     4591918     57477898
    nCacheMiss     4583077     57469807
    nLogBuffers     3     3
    bufferBytes     3145728     3145728
    (MB)     3.00     3.00
    cacheDataBytes     563450470     370211966
    (MB)     537.35     353.06
    adminBytes     29880     16346272
    lockBytes     1113     1113
    cacheTotalBytes     566596198     373357694
    (MB)     540.35     356.06
    Logging Stats          
    nFSyncs 59     1452
    nFSyncRequest     59     1452
    nFSyncTimeouts     0     0
    nRepeatFaultReads     31513     6525958
    nTempBufferForWrite     0     0
    nRepeatIteratorReads     0     0
    totalLogSize     1117658932     29226945317
    (MB)     1065.88     27872.99
    lockBytes     1113     1113

    Hello Linda,
    I am inserting 25,000,000 records of the type:
    Database 1
    Key --> Data
    [long,String,long] --> [{long,long}, {String}]
    The secondary keys are on {long,long} and {String}
    Database 2
    Key --> Data
    [long,Integer,long] --> [{long,long}, {Integer}]
    The secondary keys are on {long,long} and {Integer}
    I set the env parameters to non-transactional and setDeferredWrite(true),
    used setSecondaryBulkLoad(true), and then built two secondary indexes on {long,long} and {String} of the data portion.
    private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
        try {
            // the generic parameters and SECONDARY_KEY_NAME are our own types/constants
            SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
                store.getSecondaryIndex(
                    dataAccessLayer.getPrimaryIndex(),
                    TDetailSecondaryKey.class,
                    SECONDARY_KEY_NAME);
        } catch (DatabaseException e) {
            throw new RuntimeException(e);
        }
    }
    We are inserting into 2 databases, as mentioned above.
    NumRecs          250,000x2    2,500,000x2    25,000,000x2
    TotalTime(ms)        16877         673623        30225781
    PutTime(ms)           7684          76636         1065030
    BuildSec(ms)          4952         590207        29125773
    Sync(ms)              4241           6780           34978
    Why does building the secondary index (2 secondary databases in this case) take so much longer than inserting into the primary database - 27 times longer!?
    It's hard to believe that building the tree for the secondary database takes so much longer.
    Why doesn't building the tree for the primary database take as long? The data in the primary database is the same as its key, to be able to search on these values.
    Hence it's surprising that it takes so long.
    The cache stats mentioned above relate to these runs.
    Can you try explaining this? We are trying to figure out whether it is worth building the secondary indexes later when bulk loading.

  • Improve Database adapter insert performance

    Hopefully this is an easy question to answer. My BPEL process is passed over 8,000 records, and I need to take those records and insert them into an Oracle database. I've been trying to tune the insert by using properties like inMemoryOptimization, but the load still takes several hours. Any suggestions on how to get the Database adapter to perform better, or to load all 8,000 records at once? Thanks in advance.

    Hello.
    8,000 records doesn't sound "huge"; if a record is, say, 1 kB, then you have 8 MB, which is a large payload to move around in one piece.
    A DB merge is typically slower than an insert, though you did say you were using an insert.
    If you are inserting each row one at a time that seems like it would be pretty slow.
    Normally the input to a DB adapter insert is a collection (of rows) vs. a single row. If you have been handed 8000 individual rows you can assemble them into a collection with an iteration - tedious in BPEL but works fine.
    Daren

  • Oracle 10g Merge Insert performance

    Hi All,
    Performance-wise, is it better to use a regular insert statement or a MERGE (insert only) statement in Oracle 10g? (No updates are used in this merge statement.)
    Thanks for the input.

    Thanks for the comment. Here is more info on insert-only MERGE; I thought Oracle had a reason to add this in 10g:
    http://www.oracle-developer.net/display.php?id=310
    I am looking for the right answer about the performance.
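    For reference, an insert-only MERGE of the kind being asked about looks like this (table and column names are placeholders):
    MERGE INTO target_tab t
    USING source_tab s
    ON (t.id = s.id)
    WHEN NOT MATCHED THEN
      INSERT (t.id, t.val)
      VALUES (s.id, s.val);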
