Package - performance-wise is it correct?

Hi All
I have created a package which runs as a concurrent programme to populate 9 tables. The package includes a separate procedure to populate each of the tables, as below. I would like to know whether the below method is recommended performance-wise, or is there any better approach to achieve this?
Thanks in advance
regards
anna
procedure populate_table1 is
begin
  for my_cursor_emp in crs_emp
  loop
    insert into employees
      (emp_no
      ,first_name
      ,last_name)
    values
      (my_cursor_emp.emp_no
      ,my_cursor_emp.first_name
      ,my_cursor_emp.last_name);
  end loop;
end populate_table1;
There are many more columns in the above procedure. The package continues with:
procedure 2
procedure 3
...

Annas wrote:
Hi All
I have created a package which runs as a concurrent programme to populate 9 tables. The package includes a separate procedure to populate each of the tables, as below. I would like to know whether the below method is recommended performance-wise, or is there any better approach to achieve this?

The recommended approach would be to get rid of the cursor loops:

INSERT INTO target_table
select <columns>
from YOUR_QUERY;

This assumes you actually NEED to populate 9 tables like you say; I find that suspect in and of itself. Can you explain the end goal here? Are you populating temporary tables, doing a data migration, something else?
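
For instance, the cursor loop in populate_table1 above collapses into one set-based statement. A minimal sketch, assuming crs_emp selects from a hypothetical staging table emp_staging (substitute the cursor's actual query):

procedure populate_table1 is
begin
  -- one set-based INSERT ... SELECT replaces the row-by-row loop
  insert into employees
    (emp_no, first_name, last_name)  -- plus the remaining columns
  select emp_no, first_name, last_name
  from   emp_staging;  -- hypothetical source table standing in for crs_emp's query
end populate_table1;

A single INSERT ... SELECT avoids the per-row context switching between PL/SQL and SQL that a fetch-and-insert loop incurs.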

Similar Messages

  • We have many mappings; which one is good performance-wise?

    We have many mappings; which one is good performance-wise?

    Hi,
    Different mapping techniques are available in XI: message mapping, XSLT mapping, Java mapping, and ABAP mapping.
    • The Integration Repository includes a graphical mapping editor. It includes built-in functions for value transformations and queue and context handling. There is an interface for writing user-defined functions (Java) as well.
    • XSLT mappings can be imported into the Integration Repository; Java methods can be called from within the XSLT style sheet. Advantages of this mapping are: open standard, portable, extensible via Java user-defined functions.
    • If the transformation is very complex, it may be easiest to leverage the power of Java for mapping.
    • ABAP mapping programs can also be written to transform the message structures.
    Message Mapping
    SAP XI provides a graphical mapping tool that generates a java mapping program to be called at run time.
    • Graphically define mapping rules between source and target message types.
    • Queue-based model allows for handling of extremely large documents.
    • Drag-and-drop.
    • Generates internal Java code.
    • Built-in and user-defined functions (in Java).
    • Integrated testing tool.
    • N:M mapping is possible.
    JAVA MAPPING:
    Usually Java mapping is preferred when the target structure is relatively complex and the transformation cannot be accomplished by simple graphical mapping.
    For example, consider a simple File->IDoc scenario where the source file is a simple XML file, whereas the target is an IDoc with more than one hierarchy level, e.g. FINSTA01. Content conversion in XI can only create a single-level hierarchy, so in this scenario a Java mapping would come in handy.
    See these:
    http://help.sap.com/saphelp_nw04/helpdata/en/e2/e13fcd80fe47768df001a558ed10b6/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10dd67dd-a42b-2a10-2785-91c40ee56c0b
    /people/prasad.ulagappan2/blog/2005/06/29/java-mapping-part-i
    /people/thorsten.nordholmsbirk/blog/2006/08/10/using-jaxp-to-both-parse-and-emit-xml-in-xi-java-mapping-programs
    When to use Java mapping
    1) Java mappings are used when graphical mapping cannot help you.
    Advantages of Java mapping
    1) You can use Java APIs and classes in it.
    2) A file lookup or a DB lookup is possible.
    3) DOM is easier to use, with lots of classes to help you create nodes and elements.
    Java mapping can be used when you have complex mapping structures.
    ABAP MAPPING:
    ABAP mappings are mapping programs in ABAP objects that customers can implement using the ABAP Workbench.
    An ABAP mapping comprises an ABAP class that implements the interface IF_MAPPING in the package SAI_MAPPING. The interface has a method EXECUTE with a fixed signature.
    Applications can decide themselves in the method EXECUTE how to import and change the source XML document. If you want to use the XSLT processor of SAP Web AS, you can use the ABAP Workbench to develop a stylesheet directly rather than using ABAP mappings.
    In ABAP mapping you have read access to message header fields. To do this, an object of type IF_MAPPING_PARAM is transferred to the EXECUTE method. The interface has constants for the names of the available parameters and a method GET, which returns the respective value for a parameter name. The constants are the same as in Java mappings, although the constant MAPPING_TRACE does not exist for ABAP mappings. Instead, the trace object is transferred directly using the parameter TRACE of the method IF_MAPPING~EXECUTE.
    For more details refer
    http://help.sap.com/saphelp_nw70/helpdata/EN/ba/e18b1a0fc14f1faf884ae50cece51b/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5c46ab90-0201-0010-42bd-9d0302591383
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e3ead790-0201-0010-64bb-9e4d67a466b4
    /people/sameer.shadab/blog/2005/09/29/testing-abap-mapping
    ABAP Mapping
    /people/udo.martens/blog/2006/08/23/comparing-performance-of-mapping-programs
    https://websmp101.sap-ag.de/~sapdownload/011000358700003082332004E/HowToABAPMapping.pdf
    /people/ravikumar.allampallam/blog/2005/02/10/different-types-of-mapping-in-xi
    /people/r.eijpe/blog
    ABAP Mapping Vs Java Mapping.
    Re: Message Mapping of type ABAP Class not being shown
    Re: Performance of mappings (JAVA, XSLT, ABAP)
    XSLT Mapping
    XSLT stands for EXtensible Stylesheet Language Transformations. It is an XML-based language for transforming XML documents into any other format suitable for a browser to display, on the basis of a set of well-defined rules.
    /people/sap.user72/blog/2005/03/15/using-xslt-mapping-in-a-ccbpm-scenario
    /people/anish.abraham2/blog/2005/12/22/file-to-multiple-idocs-xslt-mapping
    The above mentioned are the mappings present in XI.
    When the mapping is critical and complicated we go for ABAP, Java, or XSLT mapping. For simple mappings we go for graphical mapping.
    The selection of a mapping also depends upon the requirement and on the scenario.
    cheers

  • Performance-wise, which is best: extending the Thread class or implementing Runnable?

    Hi,
    Which one is best performance-wise: extending the Thread class or implementing the Runnable interface?
    What are the major differences between them, and which one is best in which case?

    Which one is best performance-wise: extending the Thread class or implementing the Runnable interface?
    Which kind of performance? Do you worry about thread creation time, or about execution time?
    If the latter, then don't: there is no effect on the code being executed.
    If the former (thread creation), then browse the API Javadoc for Executor and ExecutorService, and the other execution-related classes in the same package, to learn about the various threading/execution models.
    If you worry, more generally, about throughput (which would be a better concern), then it is not impacted by whether you have implemented your code in a Runnable implementation class or a Thread subclass.
    What are the major differences between them, and which one is best in which case?
    Runnable is almost always better design-wise:
    - it will eventually be executed in a thread, but it leaves you the flexibility to choose which thread (the current one, another thread, one from a pool, ...). In particular, you should read about Executor and ExecutorService as mentioned above. If you happen to actually have a performance problem, you can change the thread creation code with little impact on the code being executed in the threads.
    - it is an interface, and leaves you free to extend another class. Especially useful for the Command pattern.
    Edited by: jduprez on May 16, 2011 2:08 PM

  • Difference between temp tables and table variables, and which one is better performance-wise?

    Hello,
    Could anyone explain the difference between a temp table (#, ##) and a table variable (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended to use for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data comes to more than 3-4 million rows. I tried using both a # temp table and a table variable, and found the table variable is faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, then table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, then a clustered index will be created automatically.
    But it also depends upon the specific scenario you are dealing with; can you share it?
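    To make the trade-off concrete, here is a minimal T-SQL sketch (the table and column names are hypothetical) contrasting the two:

    -- Temp table: explicit indexes are allowed, and it gets statistics
    CREATE TABLE #txn (emp_id INT, amt MONEY);
    CREATE NONCLUSTERED INDEX ix_txn_emp ON #txn (emp_id);

    -- Table variable: no explicit CREATE INDEX, but declaring a
    -- PRIMARY KEY inline yields an implicit clustered index
    DECLARE @txn TABLE (emp_id INT PRIMARY KEY, amt MONEY);

    With millions of rows, the statistics and explicit indexes on #txn generally give the optimizer better plans; the table variable avoids recompilations and is fine for small row counts.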
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page

  • Performance wise data representation

    Hello there,
    I would like to start a discussion regarding a performance-wise implementation solution.
    From a performance point of view, which of the following scenarios would be the better choice?
    1. Run a process on each and every record in a huge table, for example one with a million records, for all the records.
    2. Or have another table, which might have about 800 fields, representing a map crossing the ID values in the main huge table, which must be filled based on the values in the main table as a pre-processing step;
    then, instead of processing the huge table, simply query the map table, which could be indexed using the needed values.
    Which is the worse cause of bad performance: a process on many records, or many records that represent a pre-processing step?
    many thanks

    Thank you Billy for replying,
    Billy Verreynne wrote:
    Nor is performance something that one looks at after the design is done and the code written. Performance is a primary factor that needs to be considered with the h/w bought, the s/w installed and configured, the design, and every single line of code written.
    Yes, currently I am in the design phase, so I am trying to understand the major performance principles that might affect the software when dealing with huge amounts of data, whether pre-processing would be better, and similar implementation issues.
    Here is the case logically:
    The process I mentioned in the post corresponds to a procedure that must be applied to data from a table and then return a certain value, calculated on the fly.
    Some of that processing might not be needed, so in order to avoid huge unnecessary operations I need to perform some kind of predicating or indexing based on certain values.
    What is the best practice for such scenarios, performance-wise?
    Thanks

  • Processing in 2 internal tables - performance-wise better option

    Hi Experts,
    I have 2 internal tables,
    ITAB1 and ITAB2, both sorted by PSPHI.
    ITAB1 has PSPHI, some more fields, INVOICE_DATE, and AMT.
    ITAB2 has PSPHI, some more fields, and an amount.
    ITAB1 and ITAB2 will always have the same number of rows.
    I need to filter data from ITAB2 based on the invoice date given on the selection screen; since ITAB2 doesn't have an invoice date field,
    I am doing further processing to filter the records.
    I have thought of the below processing logic and wanted to know if there is a better option performance-wise:
    LOOP AT itab1 INTO wa WHERE invoice_date > p_date. "p_date = selection screen date; itab1 has the invoice date
      lv_index = sy-tabix.
      READ TABLE itab2 INTO wa2 INDEX lv_index.
      IF sy-subrc = 0 AND wa2-psphi = wa-psphi.
        DELETE itab2 INDEX lv_index.
      ENDIF.
    ENDLOOP.

    Hi Madhu,
    My Requirement is as below could you please advice on this ?
    ITAB1
    PSPHI   INVOICE   INVOICE_DATE   AMT
    15245   INV1      02/2011        400
    15245   INV2      02/2012        430
    ITAB2
    PSPHI   PSNR    MATNR   AMT
    15245   PSNR1   X       430
    15245   PSNR2   Y       400
    When the user enters the date on the selection screen as 02/2011,
    I want to delete the data from ITAB1 and ITAB2 for invoice dates greater than 02/2011.
    If I delete from ITAB1 for date > selection screen date:
    LOOP AT itab1.
      DELETE itab2 WHERE psphi = itab1-psphi. "deletes both rows in the above example, because the common field PSPHI can occur multiple times
    ENDLOOP.
    Can you advise?

  • Performance-wise, is a select statement faster on a view or on a table?

    Performance-wise, is a complex select statement (with multiple joins) faster on a view or on a table?

    Hi,
    the purpose of a view is not to provide performance benefits; it's basically a way to better structure database code and data access. A view is nothing but a stored query. When the optimizer sees references to a view in a query, it tries to merge it (i.e. replace the view with its definition), but in some cases it may be unable to do so (in the presence of analytic functions, the rownum pseudocolumn, etc.) -- in such cases views can lead to a performance degradation.
    If you are interested in performance, what you need is a materialized view, which is basically a table built from a query, but then you need to decide how you would refresh it. Please refer to the documentation for details.
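    As a rough sketch of the materialized-view route (the emp/dept tables and column names here are illustrative, not from the thread):

    CREATE MATERIALIZED VIEW emp_dept_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
    AS
    SELECT e.empno, e.ename, d.dname
    FROM   emp e
    JOIN   dept d ON d.deptno = e.deptno;

    -- refresh after the base tables change, e.g. from a batch job:
    -- EXEC DBMS_MVIEW.REFRESH('EMP_DEPT_MV');

    The join is computed once at build/refresh time, so queries against emp_dept_mv avoid repeating the multi-join work.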
    Best regards,
    Nikolay

  • Performance-wise, which is better: NOT IN or NOT EXISTS?

    Performance-wise, which is better: NOT IN or NOT EXISTS?

    Note also that NOT EXISTS is not equivalent to NOT IN:
    SQL> select * from dept where not exists (select * from emp where dept.deptno=comm);
        DEPTNO DNAME          LOC
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON
    SQL> select * from dept where deptno not in (select comm from emp);
    no rows selected
    In most cases, Oracle internally rewrites a NOT IN as a NOT EXISTS, because NOT EXISTS usually performs better.
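    The difference above comes from NULLs: EMP.COMM is NULL for most rows, and a NOT IN list containing a NULL can never evaluate to true, so no rows come back. A minimal sketch of the usual fix, using the same demo tables:

    select * from dept
    where  deptno not in (select comm from emp where comm is not null);

    With the NULLs filtered out, NOT IN returns the same four departments as the NOT EXISTS query.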
    HTH
    Laurent

  • BO4 - "The action cannot be performed" (WIS 30650) when trying to open some WebI docs

    Get the message "The action cannot be performed" (WIS 30650) when trying to open some WebI docs.
    Documents created brand new in BI 4.0 .
    Some docs open fine.
    Get message even when log in as Administrator.
    Windows 2008 Server Oracle 11.2.0.1
    Any idea what causes this?
    Many Thanks

    Hi,
    Thanks for the response.
    Distributed environment: 1 server hosts the web application server (Tomcat), 1 server hosts the BOE components.
    Contacted BO support; we did right-click on document/Modify and got the error message you referred to.
    However, we also get this message when we try to right-click/Modify existing Web Intelligence sample reports, or even try to open ones which ship with the product and have never been modified.
    Checked that the file referred to exists in the Input FRS, and it does.
    Logged in as Administrator, so it is not permission-related.
    This seems like a huge bug to me, that we can't even open the WebI sample reports which ship with the product.
    The servers look fine. I'm the only one using the system and can create and save reports fine; it's just opening or modifying some of them later, or some of the sample ones, that fails.
    Our FRS is on a networked location and uses a UNC path; I'm assuming this has no impact.
    Many Thanks

  • Aperture Conversion - My wife is converting from iPhoto to Aperture due to a large library (33,000 photos, 109 GB). Performance-wise, is it better to convert to an Aperture library and leave it on the 250 GB internal drive, or convert and store externally?

    My wife is converting from iPhoto to Aperture due to a large library (33,000 photos, 109 GB). Performance-wise, is it better to convert to an Aperture library and leave it on the 250 GB internal drive, or convert and store externally?

    You are welcome.
    convert and store externally?
    What versions of iPhoto and Aperture is your wife using? With both iPhoto 9.3 or later and Aperture 3.3 or later, she could simply open her iPhoto library in Aperture and be done, since these versions use a unified library format.
    Aperture 3.3: Using a unified photo library with iPhoto and Aperture

  • Which is better performance-wise: MOVE or MOVE-CORRESPONDING?

    Hi SAP ABAP Experts,
    Which is better performance-wise:
    MOVE or MOVE-CORRESPONDING?
    Regards, Rajneesh

    > A 
    >
    > * access path and indexes
    Indexes and hence access paths are defined when you design the data model. They are part of the model design.
    > * too large numbers of records or executions
    Consider a data warehouse environment: you have to deal with huge loads of data; a million records are considered "small" here. Terms like "small" or "large" depend on the context you are working in.
    If you have never heard of star transformation, partitioning, and parallel query, you will get lost here!
    OLTP is different: you have short transactions, but a huge number of concurrent users.
    You would not even consider bitmap indexes in an OLTP environment - but maybe a design that evenly distributes data blocks over several files to avoid hot spots on heavily used tables.
    > * processing of internal tables => no quadratic coding
    >
    > these are the main performance issues!
    >
    > > Performance is defined at design time
    > partly yes, but more is determined during runtime, you must check everything at least once. Many things can go wrong and will go wrong.
    Sorry, it's all about the data model design - sure, you have to tune later in the development, but you really can't tune successfully on a BAD data model ... you have to redesign.
    If the model is good, there is still a chance the developer chooses the worst access to it, but then you have the potential to tune with success, because your model allows for a better access strategy.
    The decisions you make in the design phase determine the potential for tuning later.
    >
    > * database does not what you expect
    I call this the black-box view: the developer is not interested in the underlying database.
    Why do we have different DB vendors if they all behave the same way? Compare, for example, the concurrency and consistency implementations in various DBs - totally different. You can't simply apply your working knowledge from one database to another DB product. I learned this the hard way while implementing on Informix and Oracle...

  • Difference between temp tables and CTEs performance-wise?

    Hi Techies,
    Can anyone explain CTEs and temp tables performance-wise? Which is the better object to use when implementing DML operations?
    Thanks in advance.
    Regards
    Cham bee

    Welcome to the world of performance tuning in SQL Server! The standard answer to this kind of question is:
    It depends.
    A CTE is a logical construct, which specifies the logical computation order for the query. The optimizer is free to recast the computation order in such a way that the intermediate result of the CTE never exists during the calculation. Take for instance this query:
    WITH aggr AS (
        SELECT account_no, SUM(amt) AS amt
        FROM   transactions
        GROUP  BY account_no
    )
    SELECT account_no, amt
    FROM   aggr
    WHERE  account_no BETWEEN 199 AND 399;
    Transactions is a big table, but there is an index on account_no. In this example, the optimizer will use that index and only compute the total amount for the accounts in the range. If you were to make a temp table of the CTE, SQL Server would have no choice but to scan the entire table.
    But there are also situations when it is better to use a temp table. This is often a good strategy when the CTE appears multiple times in the query. The optimizer is not able to pick a plan where the CTE is computed once, so it may compute the CTE multiple times. (To muddy the waters further, the optimizers in some competing products do have this capability.)
    Even if the CTE is only referred to once, it may help to materialise the CTE. The temp table has statistics, and those statistics may help the optimizer to compute a better plan for the rest of the query.
    For the case you have at hand, it's a little difficult to tell, because it is not clear to me if the conditions are the same for points 1, 2 and 3 or if they are different. But the second one, removing duplicates, can be quite difficult with a temp table, but is fairly simple using a CTE with row_number().
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Day-wise Stock Ledger Report not giving correct values for Opening Stock

    Dear Experts,
    I am working on a stock ledger report to give day-wise data.
    Since yesterday's closing stock becomes the opening stock of today:
    To get the opening stock,
    I have restricted the stock key figure with 2 variables on calday
    (DATE FROM variable with <= (less than or equal to) and offset -1,
    DATE TO variable with <= (less than or equal to) and offset -1).
    To get the closing stock,
    I have restricted the stock key figure with 2 variables on calday
    (DATE FROM variable with <= (less than or equal to),
    DATE TO variable with <= (less than or equal to)).
    But in the output, the opening stock values are not coming out correctly; for a given range of dates,
    the opening stock for the last date shows as zero.
    Could you please tell me how I can achieve the correct values for opening stock.
    Thanks in advance.

    Hi Arjun,
    Seems like you are making it more complicated. What are your selection-screen criteria?
    Ideally you should only use the offset.
    You will have say Calday in rows and stock in Column
    ____________Opening Stock_____________Closing Stock
    01/06/2009___(Closing stock of 31/05/2009)_(Stock of 01/06/2009)
    02/06/2009___(Closing stock of 01/06/2009)_(Stock of 02/06/2009)
    03/06/2009___(Closing stock of 02/06/2009)_(Stock of 03/06/2009)
    So, from the above scenario, create one RKF and include calday in it. Create a replacement-path variable on calday and apply the offset as -1.
    So, your opening stock will be calculated from the closing stock of the previous day.
    - Danny

  • What is the best way to keep your MacBook Pro in tip-top condition, performance-wise?

    What is the best way to keep the performance of a MacBook Pro in tip-top shape? Over the years my computer seems to act like a PC, with all of its hiccups and lockups.
    I am running Mountain Lion and this computer is approx. 2 years old.
    Not sure if there is some sort of software that will help with this, or if there is something else I can do.
    Thanks
    GAJ

    How to maintain a Mac
    1. Make redundant backups, keeping at least one off site at all times. One backup is not enough. Don’t back up your backups; all should be made directly from the original data. Don’t rely completely on any single backup method, such as Time Machine. If you get an indication that a backup has failed, don't ignore it.
    2. Keep your software up to date. In the App Store or Software Update preference pane (depending on the OS version), you can configure automatic notifications of updates to OS X and other Mac App Store products. Some third-party applications from other sources have a similar feature, if you don’t mind letting them phone home. Otherwise you have to check yourself on a regular basis.
    Keeping up to date is especially important for complex software that modifies the operating system, such as device drivers. Before installing any Apple update, you must check that all such modifications that you use are compatible. Incompatibility with third-party software is by far the most common cause of trouble with system updates.
    3. Don't install crapware, such as “themes,” "haxies," “add-ons,” “toolbars,” “enhancers," “optimizers,” “accelerators,” "boosters," “extenders,” “cleaners,” "doctors," "tune-ups," “defragmenters,” “firewalls,” "barriers," “guardians,” “defenders,” “protectors,” most “plugins,” commercial "virus scanners,” "disk tools," or "utilities." With very few exceptions, such stuff is useless or worse than useless. Above all, avoid any software that purports to change the look and feel of the user interface.
    It's not much of an exaggeration to say that the whole "utility" software industry for the Mac is a fraud on consumers. The most extreme examples are the "CleanMyMac" and “MacKeeper” scams, but there are many others.
    As a rule, the only software you should install is that which directly enables you to do the things you use a computer for, and doesn't change the way other software works.
    Safari extensions, and perhaps the equivalent for other web browsers, are a partial exception to the above rule. Most are safe, and they're easy to get rid of if they don't work. Some may cause the browser to crash or otherwise malfunction.  Some are malicious. Use with caution, and install only well-known extensions from relatively trustworthy sources, such as the Safari Extensions Gallery.
    Never install any third-party software unless you know how to uninstall it. Otherwise you may create problems that are very hard to solve. Do not rely on "utilities" such as "AppCleaner" and the like that purport to remove software.
    4. Don't install bad, conflicting, or unnecessary fonts. Whenever you install new fonts, use the validation feature of the built-in Font Book application to make sure the fonts aren't defective and don't conflict with each other or with others that you already have. See the built-in help and this support article for instructions. Deactivate or remove fonts that you don't really need to speed up application launching.
    5. Avoid malware. Malware is malicious software that circulates on the Internet. This kind of attack on OS X was once so rare that it was hardly a concern, but malware is now increasingly common, and increasingly dangerous.
    There is some built-in protection against downloading malware, but you can’t rely on it — the attackers are always at least one day ahead of the defense. You can’t rely on third-party protection either. What you can rely on is common-sense awareness — not paranoia, which only makes you more vulnerable.
    Never install software from an untrustworthy or unknown source. If in doubt, do some research. Any website that prompts you to install a “codec” or “plugin” that comes from the same site, or an unknown site, is untrustworthy. Software with a corporate brand, such as Adobe Flash Player, must come directly from the developer's website. No intermediary is acceptable, and don’t trust links unless you know how to parse them. Any file that is automatically downloaded from the web, without your having requested it, should go straight into the Trash. A web page that tells you that your computer has a “virus,” or that anything else is wrong with it, is a scam.
    In OS X 10.7.5 or later, downloaded applications and Installer packages that have not been digitally signed by a developer registered with Apple are blocked from loading by default. The block can be overridden, but think carefully before you do so.
    Because of recurring security issues in Java, it’s best to disable it in your web browsers, if it’s installed. Few websites have Java content nowadays, so you won’t be missing much. This action is mandatory if you’re running any version of OS X older than 10.6.8 with the latest Java update. Note: Java has nothing to do with JavaScript, despite the similar names. Don't install Java unless you're sure you need it. Most people don't.
    6. Don't fill up your boot volume. A common mistake is adding more and more large files to your home folder until you start to get warnings that you're out of space, which may be followed in short order by a boot failure. This is more prone to happen on the newer Macs that come with an internal SSD instead of the traditional hard drive. The drive can be very nearly full before you become aware of the problem.
    While it's not true that you should or must keep any particular percentage of space free, you should monitor your storage use and make sure you're not in immediate danger of using it up. According to Apple documentation, you need at least 9 GB of free space on the startup volume for normal operation.
    If storage space is running low, use a tool such as OmniDiskSweeper to explore the volume and find out what's taking up the most space. Move seldom-used large files to secondary storage.
    7. Relax, don’t do it. Besides the above, no routine maintenance is necessary or beneficial for the vast majority of users; specifically not “cleaning caches,” “zapping the PRAM,” "resetting the SMC," “rebuilding the directory,” "defragmenting the drive," “running periodic scripts,” “dumping logs,” "deleting temp files," “scanning for viruses,” "purging memory," "checking for bad blocks," "testing the hardware," or “repairing permissions.” Such measures are either completely pointless or are useful only for solving problems, not for prevention.
    To use a Mac effectively, you have to free yourself from the Windows mindset that every computer needs regular downtime maintenance such as "defragging" and "registry cleaning." Those concepts do not apply to the Mac platform. A computing device is not something you should have to think about very much. It should be an almost transparent medium through which you communicate, work, and play. If you want a machine that is always whining for your attention like a neurotic dog, use a PC.
    The very height of futility is running an expensive third-party application called “Disk Warrior” when nothing is wrong, or even when something is wrong and you have backups, which you must have. Disk Warrior is a data-salvage tool, not a maintenance tool, and you will never need it if your backups are adequate. Don’t waste money on it or anything like it.

  • Oracle packages & performance

    Can anybody give me a clear-cut idea whether the performance of Oracle changes or remains the same when an Oracle package body is created with forward declarations of its subprograms versus without forward declarations?
    Ex.
    create or replace package a as
      procedure b;
      procedure c;
    end a;
    /
    create or replace package body a as
      -- forward declarations of private subprograms
      procedure x;
      procedure y;
      procedure x is
      begin
        null;
      end x;
      procedure y is
      begin
        null;
      end y;
      procedure b is
      begin
        x;  -- call to the forward-declared procedure
      end b;
      procedure c is
      begin
        y;
      end c;
    end a;
    /

    Could you give more explanation?
