Data growth

Hi all,
Thanks in advance.
Please let me know how to get the data growth per day for the last 1 or 2 weeks on 10g.

Hi,
Please let me know how to get the data growth per day for the last 1 or 2 weeks on 10g
If you have a license for the Performance Pack and Diagnostic Pack, see dba_hist_seg_stat.
Otherwise, you need to create STATSPACK extension tables to track table growth. Here is how I do it:
http://www.dba-oracle.com/t_database_growth_reports.htm
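If you do have the Diagnostics Pack, a minimal sketch of a per-day growth query against the AWR segment statistics might look like this (the 14-day window and the MB rounding are arbitrary; adjust as needed):

-- Approximate daily growth, summed across all segments, for the last 14 days.
SELECT TRUNC(sn.begin_interval_time) AS snap_day,
       ROUND(SUM(st.space_allocated_delta)/1024/1024) AS growth_mb
FROM   dba_hist_seg_stat st,
       dba_hist_snapshot sn
WHERE  st.snap_id = sn.snap_id
AND    st.dbid = sn.dbid
AND    st.instance_number = sn.instance_number
AND    sn.begin_interval_time > SYSDATE - 14
GROUP BY TRUNC(sn.begin_interval_time)
ORDER BY 1;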
Hope this helps . . .
Donald K. Burleson
Oracle Press author
Author of "Oracle Tuning: The Definitive Reference"
http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

Similar Messages

  • EHS data growth

    Dear all,
    I am using EHS Hazardous Substance Management. I just wonder whether anyone has experienced serious data growth issues?
    I have about 100,000 specifications in the ESTRH table, over 380,000 records in ESTVH (the classes), and over 13,500,000 records in each of ESTVA, ESTDR, ESTVP and ESTDU.  ESTDF and STXL are the worst, since each has over 20,000,000 records and takes up a lot of space due to the long raw field used to store the log. AUSP has over 40,000,000 records storing the values.
    I am thinking there is something wrong with the configuration and the design on the functional side, because I have never heard people talking about this issue in EHS. Has anyone experienced anything like this? I am not sure where I should start looking.  Thanks.

    Hello
    Some further information now; the rest next week (hopefully):
    1.) Release ECC/SP14: In the area of EH&S, Support Packages in most cases contain only bug fixes; new functionality becomes available only if there is a major legal change. One example is the new module EH&S SVT, which is delivered with a Support Package. Generally the Support Packages are not well "described" (what they fix etc.), so it is not easy to detect the difference between SP X and SP Y.
    2.) EH&S general: Take a look here: https://wiki.sdn.sap.com/wiki/display/ESpackages/SAPERP6.0
    You will find documentation regarding changes (really enhancements) in the area of EH&S. Most changes come with Enhancement Package 3, which can be installed on top of ECC 6.0. Some really new and very nice features are announced for Enhancement Package 5. I am not sure about the actual status of this package; to my knowledge it is not released yet (I believe it is still in the ramp-up phase, but I am not sure). You can ask your SAP counterpart, and there are some press releases etc. regarding this (I know there is information available in the DSAG (German SAP user group) and maybe other SAP user groups). Keep in mind that SAP is moving ahead with its release strategy, so check that you are on the right release; not all SAP customers are on ECC 6.0 yet, and in most cases important new EH&S functionality is not made available for older releases.
    3.) 50% of the performance of your SAP system: a big number. My assumption is that there is room for improvement, but you will still end up with something like 25% at least. Please analyse the performance issue a little more deeply; my assumption would be that indices on some of the frequently used EH&S tables could help. You should also avoid searches with a leading wildcard like "*anilin*" in your system; something like "anilin*" is OK.
    4.) Using EH&S to generate an MSDS, a label and a bill of lading:
    If you use content as described by you, you are on the "right" side from the legal point of view. The general rules which are applied, and which I know of, create the value assignments per REAL_SUB; this is one reason why your tables "explode". EH&S does have "better/other" features which you could consider using to reduce the amount of data. But keep in mind: a change of data maintenance strategy is not easy (it affects the system set-up, training of users etc.) and will take time (and budget).
    a.) You could use "REAL_GRPs", which can be linked to REAL_SUBs. Two "extreme" strategies can be used to reduce the number of data records to be stored. You could create one REAL_GRP per "class" and data set; this REAL_GRP contains the necessary data only once. Now you only need to create the link from REAL_SUB to REAL_GRP and, voilà, you can generate an MSDS and a label (you need to do a lot of referencing, but it will work). To give you an example: you could create one REAL_GRP providing EU classification/labeling, combined with one labeling. In doing so you reduce the number of data records in ESTVT, ESTVA etc.
    b.) You could try to reduce the number of REAL_GRPs to be handled (i.e. not generating one REAL_GRP containing data for only one class, but for a number of classes which are "logically" connected; not per class but per group of classes). Generally, such groups can be established based on good working practice.
    c.) Instead of "referencing" and the use of REAL_GRPs, you could use inheritance. This has more flexibility in comparison to the use of references, but the performance is slower and there is some need to train the users etc.
    But the rule set approach is a good working practice too.
    CDHDR, CDPOS: yes, EH&S will explode the size of these tables; any data maintenance done is stored as a change in these tables. AUSP: no doubt that this table will explode; the reason is "simple": there is a lot of data to be handled in the SAP system compared with other SAP modules.
    EH&S table duplicates: the EH&S data model is "special" in comparison to other SAP modules. So yes, at first glance you could believe there are duplicates of data in the database, but if you analyse the data model of EH&S you will find that exactly this is required by the EH&S software. And if you were to use further standard SAP techniques, the number of data entries would explode even more (the use of change numbers is possible in EH&S too). Refer to http://help.sap.com/saphelp_erp60_sp/helpdata/en/a5/3adda043be11d188fe0000e8322f96/frameset.htm and you will find a link to the SAP module "Engineering Change Management (LO-ECH)", which can be used in combination with EH&S.
    You should run RC1PHDEL only if you have clarified:
    - how you can analyse the data which is stored (somewhere) later on.
    The use of this report is only of interest if the data changes rapidly and, especially, if you delete data very often, because only such "logically" deleted data is physically deleted from the database by this report (the performance difference after using it should be significant).
    I will take a look at EH&S archiving. If I have specific new information I will come back here.
    With best regards
    Christoph
    PS: As explained by you, 99% of the specifications are of type REAL_SUB. This is a high number. Do you use the BOMBOS interface? Which data model do you use (how many materials are linked to one REAL_SUB)? Is there one material per REAL_SUB? Normally I would assume a much higher number of LIST_SUBs to generate MSDS data (based on data at REAL_SUB level). How many LIST_SUBs do you use? 1000?
    PPS: Why do you have such a high number of RFC calls originating from EH&S? You are using EH&S Expert rules, I assume? Planned as a job (starting the rules in the background)?
    There is no evidence to my knowledge that SAP EH&S needs a high amount of CPU etc. in the area of the spool (printing of MSDSs and labels). How many MSDSs and labels do you print per day?
    How many WWI servers are linked to the SAP system (this relates again to the question of the high number of RFC calls in the EH&S area), and roughly how many new WWI documents need to be generated per day (or per week or per month)?

  • Data Growth Pattern

    Hi,
    Is there a way to identify the data growth of a table or tables ?
    Thanks,
    JK

    With the 10g release, you can use DBMS_SPACE.OBJECT_GROWTH_TREND (http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm#sthref2180).
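    A minimal example of calling it (SCOTT.EMP is a hypothetical owner/table; substitute your own):

    -- OBJECT_GROWTH_TREND is a pipelined function, so query it through TABLE().
    -- Each returned row has a timepoint plus space_usage, space_alloc and quality columns.
    SELECT *
    FROM   TABLE(DBMS_SPACE.OBJECT_GROWTH_TREND('SCOTT', 'EMP', 'TABLE'));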

  • Data source

    Hi
    This is my first implementation project and I want to be doubly sure about the choice of data source. I will be using a copy of the 0BCS_C10 cube as the data basis. I will be extracting data from the FAGLFLEXT table, for which I will be using 0SEM_BCS_10 as the data source per note 852971. I further understand that this data source is not capable of delta updates as per note 1008953, meaning I need to do a full upload to the cube. My concern is the performance as the days pass and the data grows. I would appreciate your thoughts and inputs based on your experience.
    Regards

    Hi,
    You have to activate this DataSource in your ECC source system (transaction RSA5 or RSA6), then replicate it in your BW system. Then you'll see it. You have to create all the elements in the chain DataSource - DSO - cube (with all transformations) manually.
    Besides, there is a lot of work in configuring the DataSource properly in the ECC system so that it is filled correctly.

  • Lock Up Your Data for Up to 90% Less Cost than On-Premises Solutions with NetApp AltaVault

    June 2015
    Explore
    Data-Protection Services from NetApp and Services-Certified Partners
    Whether delivered by NetApp or by our professional and support services certified partners, these services help you achieve optimal data protection on-premises and in the hybrid cloud. We can help you address your IT challenges for protecting data with services to plan, build, and run NetApp solutions.
    Plan Services—We help you create a roadmap for success by establishing a comprehensive data protection strategy for:
    Modernizing backup for migrating data from tape to cloud storage
    Recovering data quickly and easily in the cloud
    Optimizing archive and retention for cold data storage
    Meeting internal and external compliance regulations
    Build Services—We work with you to help you quickly derive business value from your solutions:
    Design a solution that meets your specific needs
    Implement the solution using proven best practices
    Integrate the solution into your environment
    Run Services—We help you optimize performance and reduce risk in your environment by:
    Maximizing availability
    Minimizing recovery time
    Supplying additional expertise to focus on data protection
    Rachel Dines
    Product Marketing, NetApp
    The question is no longer if, but when you'll move your backup-and-recovery storage to the cloud.
    As a genius IT pro, you know you can't afford to ignore cloud as a solution for your backup-and-recovery woes: exponential data growth, runaway costs, legacy systems that can't keep pace. Public or private clouds offer near-infinite scalability, deliver dramatic cost reductions and promise the unparalleled efficiency you need to compete in today's 24/7/365 marketplace.
    Moreover, an ESG study found that backup and archive rank first among workloads enterprises are moving to the cloud.
    Okay, fine. But as a prudent IT strategist, you demand airtight security and complete control over your data as well. Good thinking.
    Hybrid Cloud Strategies Are the Future
    Enterprises, large and small, are searching for the right blend of availability, security, and efficiency. The answer lies in achieving the perfect balance of on-premises, private cloud, and public services to match IT and business requirements.
    To realize the full benefits of a hybrid cloud strategy for backup and recovery operations, you need to manage the dynamic nature of the environment— seamlessly connecting public and private clouds—so you can move your data where and when you want with complete freedom.
    This begs the question of how to integrate these cloud resources into your existing environment. It's a daunting task. And, it's been a roadblock for companies seeking a simple, seamless, and secure entry point to cloud—until now.
    Enter the Game Changer: NetApp AltaVault
    NetApp® AltaVault® (formerly SteelStore) cloud-integrated storage is a genuine game changer. It's an enterprise-class appliance that lets you leverage public and private clouds with security and efficiency as part of your backup and recovery strategy.
    AltaVault integrates seamlessly with your existing backup software. It compresses, deduplicates, encrypts, and streams data to the cloud provider you choose. AltaVault intelligently caches recent backups locally while vaulting older versions to the cloud, allowing for rapid restores with off-site protection. This results in a cloud-economics–driven backup-and-recovery strategy with faster recovery, reduced data loss, ironclad security, and minimal management overhead.
    AltaVault delivers both enterprise-class data protection and up to 90% less cost than on-premises solutions. The solution is part of a rich NetApp data-protection portfolio that also includes SnapProtect®, SnapMirror®, SnapVault®, NetApp Private Storage, Cloud ONTAP®, StorageGRID® Webscale, and MetroCluster®. Unmatched in the industry, this portfolio reinforces the data fabric and delivers value no one else can provide.
    Figure 1) The NetApp AltaVault Cloud-Integrated Storage Appliance. (Source: NetApp, 2015)
    Four Ways Your Peers Are Putting AltaVault to Work
    How is AltaVault helping companies revolutionize their backup operations? Here are four ways your peers are improving their backups with AltaVault:
    Killing Complexity. In a world of increasingly complicated backup and recovery solutions, financial services firm Spot Trading was pleased to find its AltaVault implementation extremely straightforward—after pointing their backup software at the appliance, "it just worked."
    Boosting Efficiency. Australian homebuilder Metricon struggled with its tape backup infrastructure and rapid data growth before it deployed AltaVault. Now the company has reclaimed 80% of the time employees formerly spent on backups—and saved significant funds in the process.
    Staying Flexible. Insurance broker Riggs, Counselman, Michaels & Downes feels good about using AltaVault as its first foray into public cloud because it isn't locked in to any one approach to cloud—public or private. The company knows any time it wants to make a change, it can.
    Ensuring Security. Engineering firm Wright Pierce understands that if you do your homework right, it can mean better security in the cloud. After doing its homework, the firm selected AltaVault to securely store backup data in the cloud.
    Three Flavors of AltaVault
    AltaVault lets you tap into cloud economics while preserving your investments in existing backup infrastructure, and meeting your backup and recovery service-level agreements. It's available in three form factors: physical, virtual, and cloud-based.
    1. AltaVault Physical Appliances
    AltaVault physical appliances are the industry's most scalable cloud-integrated storage appliances, with capacities ranging from 32TB up to 384TB of usable local cache. Companies deploy AltaVault physical appliances in the data center to protect large volumes of data. These datasets typically require the highest available levels of performance and scalability.
    AltaVault physical appliances are built on a scalable, efficient hardware platform that's optimized to reduce data footprints and rapidly stream data to the cloud.
    2. AltaVault Virtual Appliances for Microsoft Hyper-V and VMware vSphere
    AltaVault virtual appliances are an ideal solution for medium-sized businesses that want to get started with cloud backup. They're also perfect for enterprises that want to safeguard branch offices and remote offices with the same level of protection they employ in the data center.
    AltaVault virtual appliances deliver the flexibility of deploying on heterogeneous hardware while providing all of the features and functionality of hardware-based appliances. AltaVault virtual appliances can be deployed onto VMware vSphere or Microsoft Hyper-V hypervisors—so you can choose the hardware that works best for you.
    3. AltaVault Cloud-based Appliances for AWS and Microsoft Azure
    For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, cloud-based AltaVault appliances on Amazon Web Services (AWS) and Microsoft Azure are key to enabling cloud-based recovery.
    On-premises AltaVault physical or virtual appliances seamlessly and securely back up your data to the cloud. If the primary site is unavailable, you can quickly spin up a cloud-based AltaVault appliance in AWS or Azure and recover data in the cloud. Usage-based, pay-as-you-go pricing means you pay only for what you use, when you use it.
    AltaVault solutions are a key element of the NetApp vision for a Data Fabric; they provide the confidence that—no matter where your data lives—you can control, integrate, move, secure, and consistently manage it.
    Figure 2) AltaVault integrates with existing storage and software to securely send data to any cloud. (Source: NetApp, 2015)
    Putting AltaVault to Work for You
    Four common use cases illustrate the different ways that AltaVault physical and virtual appliances are helping companies augment and improve their backup and archive strategies:
    Backup modernization and refresh. Many organizations still rely on tape, which increases their risk exposure because of the potential for lost media in transport, increased downtime and data loss, and limited testing ability. AltaVault serves as a tape replacement or as an update of old disk-based backup appliances and virtual tape libraries (VTLs).
    Adding cloud-integrated backup. AltaVault makes a lot of sense if you already have a robust disk-to-disk backup strategy, but want to incorporate a cloud option for long-term storage of backups or to send certain backup workloads to the cloud. AltaVault can augment your existing purpose-built backup appliance (PBBA) for a long-term cloud tier.
    Cold storage target. Companies want an inexpensive place to store large volumes of infrequently accessed file data for long periods of time. AltaVault works with CIFS and NFS protocols, and can send data to low-cost public or private storage for durable long-term retention.
    Archive storage target. AltaVault can provide an archive solution for database logs or a target for Symantec Enterprise Vault. The simple-to-use AltaVault management platform can allow database administrators to manage the protection of their own systems.
    We see two primary use cases for AltaVault cloud-based appliances, available in AWS and Azure clouds:
    Recover on-premises workloads in the cloud. For organizations without a secondary disaster recovery location, or for companies looking for extra protection with a low-cost tertiary site, AltaVault cloud-based appliances are key to enabling cloud-based disaster recovery. Via on-premises AltaVault physical or virtual appliances, data is seamlessly and securely protected in the cloud.
    Protect cloud-based workloads.  AltaVault cloud-based appliances offer an efficient and secure approach to backing up production workloads already running in the public cloud. Using your existing backup software, AltaVault deduplicates, encrypts, and rapidly migrates data to low-cost cloud storage for long-term retention.
    The benefits of cloud—infinite, flexible, and inexpensive storage and compute—are becoming too great to ignore. AltaVault delivers an efficient, secure alternative or addition to your current storage backup solution. Learn more about the benefits of AltaVault and how it can give your company the competitive edge you need in today's hyper-paced marketplace.
    Rachel Dines is a product marketing manager for NetApp where she leads the marketing efforts for AltaVault, the company's cloud-integrated storage solution. Previously, Rachel was an industry analyst for Forrester Research, covering resiliency, backup, and cloud. Her research has paved the way for cloud-based resiliency and next-generation backup strategies.

    You didn't say what phone you have - but you can set it to update and backup and sync over wifi only - I'm betting that those things are happening "automatically" using your cellular connection rather than wifi.
    I sync my email automatically when I have a wifi connection, but I can sync manually if I need to.  Downloads happen for me only on wifi, photo and video backup are only over wifi, app updates are only over wifi....check your settings.  Another recent gotcha is Facebook and videos.  LOTS of people are posting videos on Facebook and they automatically download and play UNLESS you turn them off.  That can eat up your data in a hurry if you are on FB regularly.

  • Database growth following index key compression in Oracle 11g

    Hi,
    We have recently implemented index key compression in our SAP R3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
    What I mean by this is that while the indexes have compressed on average 3-fold (over the entire DB), we are not seeing this reflected in the DB growth going forward.
    i.e. we were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Our trial with ACO compression seemed to yield a reduction of table growth rates corresponding to the compression ratio (i.e. table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
    Does anyone know whether a rebuild with index key compression will also compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what is there already?
    Cheers
    Theo

    Hello Theo,
    Does anyone know whether a rebuild with index key compression will also compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what is there already?
    I wrote a blog about index key compression internals a long time ago ([Oracle] Index key compression), but now I noticed that one important statement is missing. Yes, future entries are compressed too - index key compression is a "live compression" feature.
    We were experiencing ~15GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Do you mean that your DB size still increases by ~15GB per month overall, or just the index segments? Break the growth down by segment type - maybe indexes are only a small part of your system overall.
    If you have enabled compression and performed a reorg, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on the way the data is inserted/updated and which indexes are compressed.
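    For reference, a sketch of how compression is enabled and verified (the index name and prefix length here are hypothetical; the right prefix length depends on your leading columns):

    -- Rebuild with key compression on the first two leading columns;
    -- entries inserted after the rebuild are compressed as well ("live compression").
    ALTER INDEX sapsr3."SOME_INDEX~0" REBUILD COMPRESS 2;

    -- Verify which indexes are compressed and with which prefix length.
    SELECT index_name, compression, prefix_length
    FROM   dba_indexes
    WHERE  owner = 'SAPSR3'
    AND    compression = 'ENABLED';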
    Regards
    Stefan

  • Oracle Apps 11i - Data Archival

    Hi,
    Has anyone done data archival on Oracle Apps? I would like to know if there are any best practices or any guidelines for the data archival.
    Kindly share your experience on data archival on Oracle Apps.
    Regards
    Sridhar M

    Hi;
    Please see:
    Oracle E-Business Suite Data Archival Strategy
    http://documents.club-oracle.com/downloads.php?do=file&id=1862
    Can We archive the EBS r12 tables data?
    Also see:
    http://it.dspmanagedservices.co.uk/blog-1/bid/60253/Managing-data-growth-on-E-Business-Suite-with-an-archiving-strategy
    Check this pdf
    Regards
    Helios

  • Oracle EBS Data Purging and Archival

    Hi,
    I would like to know if there is any tool available in market for Oracle EBS data purging and Archival?
    Thanks,

    Yes, there are 3rd-party tools available which will apply a set of business rules (e.g. all data older than Nov. 1, 2007) across the various Oracle modules implemented at a customer site.
    They are 3rd-party tools; you can go to Oracle.com and look in the partner validated integration solutions. At the moment there are two partners offering such an integrated solution:
    Solix EDMS Validated Integration with 12.1
    IBM Optim Data Growth Solution
    The only other solution is to hire OCS for a custom-developed solution.

  • Huge database Growth

    Hello Guys,
    We have been observing huge database growth in our PRD environment.
    We have to add at least a 25GB datafile weekly to tablespace PSAPSR3.
    I had a look at DB02 for TOP SIZES and TOP GROWTH.
    Owner     Name     Partition     Type     Tablespace     Size(MB)     Chg.Size/day     #Extents     #Blocks     Next Extent(MB)
    SAPSR3     LIPS          TABLE     PSAPSR3     21367.000     364.433     520     2734976     2.500
    SAPSR3     BSIS          TABLE     PSAPSR3     16460.000     277.667     442     2106880     10.000
    SAPSR3     CE11000          TABLE     PSAPSR3     16360.000     262.500     441     2094080     10.000
    SAPSR3     VBFA          TABLE     PSAPSR3     15402.000     265.133     425     1971456     10.000
    SAPSR3     GLPCA          TABLE     PSAPSR3     15171.000     259.867     425     1941888     10.000
    SAPSR3     FAGLFLEXA          TABLE     PSAPSR3     13738.000     232.667     399     1758464     10.000
    SAPSR3     ACCTIT          TABLE     PSAPSR3     12788.000     215.067     384     1636864     10.000
    SAPSR3     ARFCSDATA          TABLE     PSAPSR3     12350.000     410.400     380     1580800     2.500
    SAPSR3     RFBLG          TABLE     PSAPSR3     11433.000     194.667     363     1463424     2.500
    SAPSR3     CE41000_ACCT          TABLE     PSAPSR3     11177.000     184.000     360     1430656     10.000
    SAPSR3     VBAP          TABLE     PSAPSR3     9663.000     156.433     336     1236864     10.000
    SAPSR3     VBRP          TABLE     PSAPSR3     8308.000     140.800     313     1063424     2.500
    SAPSR3     FAGL_SPLINFO          TABLE     PSAPSR3     7960.000     135.200     308     1018880     20.000
    SAPSR3     MSEG          TABLE     PSAPSR3     7936.000     134.400     307     1015808     10.000
    SAPSR3     BSIS~0          INDEX     PSAPSR3     7488.000     132.267     300     958464     2.500
    SAPSR3     VBFA~0          INDEX     PSAPSR3     7304.000     123.533     299     934912     2.500
    SAPSR3     DBTABLOG          TABLE     PSAPSR3     7303.000     83.200     300     934784     10.000
    SAPSR3     COEP          TABLE     PSAPSR3     6991.000     119.467     293     894848     10.000
    SAPSR3     CE41000          TABLE     PSAPSR3     6144.000     91.733     279     786432     10.000
    SAPSR3     FAGLFLEXA~3          INDEX     PSAPSR3     6028.000     104.533     278     771584     2.500
    SAPSR3     FAGL_SPLINFO_VAL~0          INDEX     PSAPSR3     5702.000     98.133     273     729856     2.500
    SAPSR3     FAGLFLEXA~0          INDEX     PSAPSR3     5568.000     98.133     270     712704     2.500
    We have around 12,000 sales orders daily.
    I want to know why the database is growing at such an alarming pace, or at least find the transactions which are causing the huge inserts and updates.
    Regards
    Abhishek

    Hi Abhishek,
    In addition to the above, a very interesting area to work on periodically is Data Volume Management.
    SAP has released version 6.3 of this guide.
    Click on this link
    https://websmp101.sap-ag.de/~sapidb/011000358700005044382000E
    This guide covers almost all tables which show considerable data growth, and the preventive actions that can be taken to keep the total database size under control. Basically, the guide covers the areas of Prevention, Aggregation, Deletion and Archiving.
    Coupling the guide's recommendations with good space management activities like table reorgs will definitely keep the system away from performance issues due to database size.
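    On the space-management side, one simple reorg technique for tables whose rows are deleted frequently (ARFCSDATA in the list above is a typical candidate) is an online shrink - a sketch, assuming Oracle 10g or later, an ASSM tablespace, and a table type that supports shrink:

    ALTER TABLE sapsr3.arfcsdata ENABLE ROW MOVEMENT;
    -- Reclaims free space below the high-water mark; CASCADE includes dependent indexes.
    ALTER TABLE sapsr3.arfcsdata SHRINK SPACE CASCADE;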
    This is an on-going project at some customer places.
    Br,
    Venky

  • SAP Data Archiving

    I have question on Archiving Statistics.
    I get some statistical information in the spool file of the archive run, and I can also see some additional statistical information when clicking on the "Statistics" button under the SARA transaction.
    I am trying to understand the relation, primarily, among:
    1) Size of Archiving Session in MB
    2) Occupied Database space in MB
    3) Deleted Database space in MB
    Which of these numbers, or which combination of them, reflects the actual disk space freed in the database by the archiving run?
    This information is critical for upper management.
    Your help is highly appreciated.
    Regards,
    Srinivasa

    Hi Juan,
    Appreciate your inputs. Even though performance improvement is the primary objective for implementing archiving, containing the data growth is an objective as well. I am not sure if there are any methodologies to quantify the performance improvements resulting from archiving. Currently I am concerned with quantifying the containment of data growth as a result of archiving.
    I have been also looking at documentation on archiving statistics. SAP says data archiving does a 5:1 compression for non-cluster tables, and no further compression for cluster tables.
    When SAP provides the statistical information (in the spool file of an archive session and also via the "Statistics" button under the SARA transaction), I believe it should be CORRECT. But I am confused as to what the numbers in the statistics really reflect.
    Here are the numbers from a production run of CO_ML_IDX (which archives data from CKMI1 table):
    From the spool file the numbers are:
    Size of Archiving Session in MB   -    700.900
    Occupied Database Space in MB     - 10,827.569
    From the statistics display the numbers are:
    Disk Space         -   700.9
    DB Space (Write)   - 10827.57
    DB Space (delete)  -     0
    Did this archive run free up 3504.5 MB (which is 5 * 700.9)
    or 10,827.57 MB (the DB space written)? Should the "DB Space (Delete)" value not reflect the space deleted from the database? If so, it should have some non-zero value.
    How are these three values related to actual disk space freed up in the database as a result of this archive run?
    I would appreciate you sharing your findings.
    Regards,
    Srinivasa

  • RAID for Consumer PCs

    If you have an interest in using RAID, then the MS Word document attached to this post may be useful information.  Scroll all the way to the bottom for the attachment.
    RAID for Consumer PCs
    Table of Contents
    RAID
    RAID 0
    RAID 1
    RAID 5
    RAID 10 (0+1)
    INTEL Controller Support for Different RAID Configurations
    Background Information for Creating a RAID configuration.
    Considerations:
    Performance comparisons
    Creating a RAID 0 Array
    Creating a RAID 1 Array
    Creating a RAID 5 Array
    Creating a RAID 10 (0+1) Array
    Conversion from RAID 0 to RAID 5
    RAID 0 to RAID 5 Observations
    Other RAID Migrations
    RAID
    RAID is an acronym for Redundant Array of Inexpensive Disks, also commonly called Redundant Array of Independent Disks.  A RAID array is created when two or more hard disks or solid state drives (SSDs) are combined to form a logical volume using one of several different configurations.  Consumer-level PCs typically use RAID 0, RAID 1, RAID 5 and occasionally RAID 10 (0+1).  The RAID configuration choice depends on the requirements for redundancy, speed and capacity, and will be a compromise between speed, redundancy, capacity and cost.  The different RAID configuration options can be restricted by hardware limitations.
    RAID 0
    A RAID 0 volume is created when 2 hard disks are used and then data is spanned or striped across the different hard disks.  The process of spanning data across different hard disks is also called scatter loading.  By spreading data across multiple hard disks, a significant performance improvement can be gained.  However, should one of the hard disks fail, then the entire RAID 0 configuration becomes unusable.  A RAID 0 volume combines the capacity of the hard drives being used in the configuration.
    RAID 1
    A RAID 1 configuration is created when 2 hard disks are used and the data on hard drive 1 is replicated to hard drive 2.  The process is also called data mirroring.  This configuration provides redundancy in the event of a single hard drive failure, but at the expense of degraded write performance, as data has to be written to multiple hard drives.  Since the data is replicated between two volumes, the overall capacity is limited to that of one hard drive.
    RAID 5
    A RAID 5 configuration is similar to a RAID 0 array except that RAID 5 deploys distributed parity, also referred to as checksum data.  Blocks of data are striped across three or more hard drives, and each hard drive contains block-level recreation data (parity).  In the event of a hard drive failure, parity allows the data to be accessed through a dynamic data recreation process.  The downside to the dynamic data recreation process is a reduction in performance until the failing hard drive can be replaced.  RAID 5 performs best for read accesses, as write operations are slower while the parity blocks are being updated.  Parity data does consume disk space: a three hard drive RAID 5 array has about the same total data capacity as a two hard drive RAID 0 array.  For example, three 1 TB drives in RAID 5 yield (3 - 1) x 1 TB = 2 TB of usable capacity, the same as two 1 TB drives in RAID 0.
    RAID 10 (0+1)
    A RAID 10 configuration also called RAID 0+1 can be implemented in two different methods depending on the hardware being used.  The below left configuration is a striped RAID 1 array being replicated and the below right configuration is a RAID 0 array being replicated.  A RAID 10 configuration can provide a RAID 0 performance level even with the loss of one hard disk.  The downside to RAID 10 is the reduced configuration capacity.
    INTEL Controller Support for Different RAID Configurations.
    The RAID configurations used in this document are based on the INTEL SATA controller.
    See the below table.
    NOTE: Even though your system board may have the appropriate Intel controller,
    not all system boards will provide RAID functionality.
    Background Information for Creating a RAID configuration.
    You can create a RAID configuration provided that:
    The PC cabinet can accommodate additional hard drive(s) if needed.
    The system board has the necessary open SATA port(s) if needed.
    The SATA controller can support the desired RAID configuration.
    You can determine the status of your RAID configuration by using the Intel Rapid Storage Technology (IRST) software.  If you don’t have this software on your PC then you can download IRST from the Intel web site.  I recommend that you use the latest version available from Intel.  Background information can be obtained from the IRST User Guide.  When you launch IRST, the help topics are an excellent source of information.  You can access IRST by going into the Control Panel and select Intel Rapid Storage Technology. You can also access IRST from the lower right Task Bar location.  IRST should resemble a hard drive icon and normally it has a green check mark.  By default IRST is set as delayed startup so don’t expect to see it active right after boot up. The Intel images being used in this document are from IRST version 11.1.0.1006.
    If the SATA controller in your PC is not set to RAID, then read this Microsoft article if you are running Vista or Windows 7.  You need to run the MrFixIt script before you reboot your PC into the BIOS and change the SATA controller mode to RAID.  The script will then allow Windows to choose the correct driver when you reboot your PC.  If you plan on using a boot drive image restore, be sure to run the script just prior to taking your image backup.  That way your backup image is set to allow Windows to choose the correct SATA driver.
    Before creating any RAID configuration, always back up your data and image the boot hard disk to external media such as a USB drive.  Additionally, be sure that your imaging product's boot disk is functional, particularly if the PC's boot hard disk is part of the RAID configuration.  It's best to use a commercial hard disk imaging product, as support and functionality are generally better than with "freebie" software.
    Be sure that your PC is running the latest available BIOS.  BIOS updates are used to update the Intel ROM firmware.  Additionally, check for hard drive and SSD firmware updates.
    Summary of preparation steps in priority order:
    Backup your data.
    Test out your recovery and restore procedures.
    Update the following: BIOS, hard drive firmware, SSD firmware
    Update IRST
    Execute the Microsoft MrFixIt if your PC is not set to RAID mode in the BIOS.
    Create an image of your boot hard drive.
    Considerations:
    After replacing a failed hard drive, don’t expect the rebuild process to be fast.  All of the data that existed on the failed hard drive must be either regenerated using parity data or replicated to the new hard drive.
    I recommend that you use an uninterruptible power supply (UPS) when using RAID 5.  Cached write data needs to be written to the hard drive in the event of a power failure to avoid data loss.  You might want to consider disabling the write-back cache if you are not using a UPS; doing so costs some performance but improves data integrity.
    If you need a RAID array over 2 TB then your PC needs: a UEFI BIOS, a 64-bit operating system, and GPT-formatted hard drives.  Review this Microsoft article on Windows and GPT FAQs.
    Since an MBR-formatted array will limit the usable space to 2 TB, it's best to use hard drives that are 1 TB or less for RAID 0, 5 and 10.
    Consider the data growth rate and the size of the array.  The Intel controller will limit the number of hard drives.  The size of the PC cabinet and available system board SATA ports will also be growth constraints.  It’s not uncommon for a business to experience an annual data growth rate of 20 percent.
    If you need a RAID solution beyond the typical consumer level RAID configurations, then you should review the RAID options available from HP.
    RAID technology is not infallible, so you need to consider backups.  A voltage spike inside your PC could render the RAID unusable and unrecoverable.  Corrupted data or a virus are other reasons for keeping backups.  An external USB-connected hard drive might be sufficient for backups.
    Thoroughly test your backup and restore software.  Always keep more than one backup copy of your data.
    Performance comparisons:
    All of the hard drives benchmarked are Hitachi 1.5 TB SATA III hard drives connected as SATA II devices.  HD Tune was used to benchmark the scenarios using default settings.
    Configuration        Average MB/s    Maximum MB/s
    Single hard drive    113             152
    RAID 0               222             289
    RAID 1               104             142
    RAID 5               221             288
    RAID 10 (0+1)        220             274
    Creating a RAID 0 Array.
    If you are configuring the Windows boot drive into a RAID 0 array, then you need to use the Intel option ROM method for creating the array.  Tapping Cntl-i at boot up will get you into the Intel option ROM firmware setup utility.  Once the array has been created then boot up your image recovery disk and load the array from your image backup.
    If you are creating a RAID 0 data only array then you can use the IRST when running Windows to create the array.  You can also use the Intel option ROM firmware setup utility.  Even though this HP VISTA RAID setup article is dated, it does have some excellent information.
    Launch IRST.
    You can observe in the above image the status of the hard drives attached to the Intel SATA controller.  Now click on Create. Select Optimized Disk (RAID 0) then click on Next.
    Configure the RAID 0 array by selecting two hard drives of the same size and click on Next.
    Next click on Create Volume.
    A warning window will appear.  Click on OK.
    The new RAID volume is now created. However, you now need to use Windows Disk Management to ready the volume for use.
    Creating a RAID 1 Array.
    If you are configuring the Windows boot drive into a RAID 1 array, then you need to use the Intel option ROM method for creating the array.  Tapping Cntl-i at boot up will get you into the Intel option ROM firmware setup utility.  Once the array has been created then boot up your image recovery disk and load the array from your image backup.
    If you are creating a RAID 1 data only array then you can use the IRST when running Windows to create the array.  You can also use the Intel option ROM firmware setup utility.  Even though this HP VISTA RAID setup article is dated, it does have some excellent information.
    Launch IRST.
    You can observe the above status of the hard drives attached to the Intel SATA controller.  Now click on Create.
    Select Real-time data protection (RAID 1) and click NEXT.
    Configure the RAID 1 array by selecting two hard drives of the same size and click on Next.
    Next click on Create Volume.
    The new RAID volume is now created. However, you now need to use Windows Disk Management to ready the volume for use.
    Creating a RAID 5 Array.
    A RAID 5 array will require three to four hard drives.  While it is possible to convert a RAID 0 to a RAID 5 array, I recommend that you consider building the RAID 5 array from scratch rather than using a conversion method.
    If you are configuring the Windows boot drive into the RAID 5 array, then you need to use the Intel option ROM method for creating the array.  Tapping Cntl-i at boot up will get you into the Intel option ROM firmware setup utility.  Once the array has been created then boot up your image recovery disk and load the array from your image backup.
    If you are creating a RAID 5 data only array then you can use the IRST when running Windows to create the array.  You can also use the Intel option ROM firmware setup utility.  Even though this HP VISTA RAID setup article is dated, it does have some excellent information.
    The following procedure will build the RAID 5 array with three hard drives using IRST.
    Launch IRST.
    During this create process, I will be using the last three hard drives listed in the above image under Storage System View.  Now click Create.
    Select Efficient data hosting and protection (RAID 5) and click on Next.
    Select the three hard drives for RAID 5 and click on Next.
    Review the volume creation selections then click on Create Volume.
    Review the final warning then click on OK.
    Now click on OK and review the final array status.
    Creating a RAID 10 (0+1) Array.
    A RAID 10 (0+1) array will require four hard drives.
    If you are configuring the Windows boot drive into the RAID 10 (0+1) array, then you need to use the Intel option ROM method for creating the array.  Tapping Cntl-i at boot up will get you into the Intel option ROM firmware setup utility.  Once the array has been created then boot up your image recovery disk and load the array from your image backup.
    If you are creating a RAID 10 (0+1) data only array then you can use the IRST when running Windows to create the array.  You can also use the Intel option ROM firmware setup utility.  Even though this HP VISTA RAID setup article is dated, it does have some excellent information.
    The following procedure will build the RAID 10 (0+1) array using four hard drives using IRST.
    Launch IRST.
    This create process will be using the last four hard drives listed in the above image under Storage System View.  Notice that the hard drives are not the same size: two of the hard drives are 1.5 TB and two are 2 TB.  While it's recommended to use hard drives that are all the same size, it's not required.  The RAID 10 creation program will pick the two smallest hard drives for the striped pair and the two largest hard drives for the replication pair, not the opposite, because if the two largest hard drives were used as the striped pair, their data would not fit on the two smaller hard drives for replication.  Now click Create.
    Select Balanced performance and data protection (RAID 10) and click on Next.
    Select the four hard drives for RAID 10 and click on Next.
    Notice that IRST is set to create a RAID 10 volume with the capacity of 2.7 TB.
    Review the volume creation selections then click on Create Volume.
    Review the final warning then click on OK.
    Now click on OK and review the final array status.
    Conversion from RAID 0 to RAID 5
    If you have a RAID 0 hard disk configuration and you are concerned that a hard drive failure will cause your PC to crash or result in data loss, then you might have the option to use RAID 5.  A RAID 5 three-drive configuration can survive a single hard drive failure, but not two failing hard drives.  While there are other RAID configurations possible, this document will only address a three hard drive configuration using the Intel SATA controller.  Some of the newer HP PCs can accommodate three hard drives and can be configured with RAID 5 when ordered.
    Observe the below image.  This PC has a RAID 0 2.7 TB array.  To build the RAID 5 array, you will need to add (configure) an additional hard drive into the array. Click on Manage and then add an eligible hard drive to the array.
    Note: all data on the hard drive to be added to the array will be lost since parity and data from the existing array needs to be written to the added hard drive to create the RAID 5 array.  Take backups of your existing array in case something goes wrong.
    Once you are on the Manage screen then click on Change type.
    The following screen will appear:
    Select the drive to be included into the array and click on OK.
    When the migration process begins, the Status indicates migrating and the Type is RAID 5.  The hard drive added was 2 TB, which meets the minimum amount.  BE PATIENT!  The migration process will take a very long time for an in-place migration to complete.
    It's much faster to delete the original RAID 0 volume, create the new RAID 5 volume and then reload the original RAID 0 image from your backups.  I recommend that you consider this method versus the in-place approach.
     Click on Status to show the migration progress.
    RAID 0 to RAID 5 Observations
    I was able to shut down and boot the RAID configuration before the migration process had completed.  The in-place migration method was very slow, about 3% per hour, hence my recommendation to use a different method.  Booting up from a different hard drive before the migration process has completed results in a BSOD on boot up.
    After completing the RAID 5 conversion, I did receive a message from IRST indicating that one or more volumes is protected against a hard drive failure.
    Other RAID Migrations
    While there are other RAID migration options available, they can be platform (chipset) specific.  Review this Intel chipset article on supported RAID migrations.  Even though the Intel RAID migration has a safety function built into the process in the event of a power loss or shutdown, it's always best to have a complete set of up-to-date backups.
    The migration process can be painfully slow.  The migration time is largely dependent on the hard drive sizes and the number of hard drives involved in the overall migration.  In some cases it might be faster to build the RAID array from scratch and then load the data back onto the array, versus using an in-place migration process.
    It is possible to increase the overall RAID array (volume) size with some RAID configurations by adding hard drives to the array.   The overall size of the array may be limited by how the array had been previously formatted by Windows.  Review the information under the Considerations topic in this document.
    *********updated August 21, 2013
    If you are using SSDs in a RAID 0 configuration then you will need an Intel 7 or 8 series chipset plus Intel Rapid Storage Technology (IRST) version 11 or higher for Windows TRIM support to function.  The latest version of IRST that I have seen as of 8/21/2013 is 12.7.1036.
    *************DISCLAIMER***********
    There may be inaccuracies with the information contained in this document so please consider that when using RAID.
    *************DISCLAIMER***********
    HP DV9700, t9300, Nvidia 8600, 4GB, Crucial C300 128GB SSD
    HP Photosmart Premium C309G, HP Photosmart 6520
    HP Touchpad, HP Chromebook 11
    Custom i7-4770k,Z-87, 8GB, Vertex 3 SSD, Samsung EVO SSD, Corsair HX650,GTX 760
    Custom i7-4790k,Z-97, 16GB, Vertex 3 SSD, Plextor M.2 SSD, Samsung EVO SSD, Corsair HX650, GTX 660TI
    Windows 7/8 UEFI/Legacy mode, MBR/GPT
    Attachments:
    RAID for Consumer PCs.doc  3761 KB

    Great document
    I am a volunteer. I am not an HP employee.
    To say THANK YOU, press the "thumbs up symbol" to render a KUDO. Please click Accept as Solution, if your problem is solved. You can render both Solution and KUDO.
    The Law of Effect states that positive reinforcement increases the probability of a behavior being repeated. (B.F.Skinner). You toss me KUDO and/or Solution, and I perform better.
    (2) HP DV7t i7 3160QM 2.3Ghz 8GB
    HP m9200t E8400,Win7 Pro 32 bit. 4GB RAM, ASUS 550Ti 2GB, Rosewill 630W. 1T HD SATA 3Gb/s
    Custom Asus P8P67, I7-2600k, 16GB RAM, WIN7 Pro 64bit, EVGA GTX660 2GB, 750W OCZ, 1T HD SATA 6Gb/s
    Custom Asus P8Z77, I7-3770k, 16GB RAM, WIN7 Pro 64bit, EVGA GTX670 2GB, 750W OCZ, 1T HD SATA 6Gb/s
    Both Customs use Rosewill Blackhawk case.
    Printer -- HP OfficeJet Pro 8600 Plus

  • Cm:select performance problem with multiple likes query clause

    I have a query like:
    listItem like '*abc.xml*' && serviceId like '*xyz.xml*'
    Can we have two like clauses, as mentioned above, in cm:select? The above executes successfully but takes too much time to process.
    Can we simplify the above-mentioned query, or is there any other solution? Please help me with this issue.
    Thanks & Regards,
    Murthy Nalluri

    A few notes:
    1. You seem to have either a VPD policy active or you're using views that add some more predicates to the query, according to the plan posted (the access on the PK_OPERATOR_GROUP index). Could this make any difference?
    2. The estimates of the optimizer are really very accurate - actually astonishing - compared to the tkprof output, so the optimizer seems to have a very good picture of the cardinalities and therefore the plan should be reasonable.
    3. Did you gather index statistics as well (using COMPUTE STATISTICS when creating the index or "cascade=>true" option) when gathering the statistics? I assume you're on 9i, not 10g according to the plan and tkprof output.
    4. Looking at the amount of data that needs to be processed it is unlikely that this query takes only 3 seconds, the 20 seconds seems to be OK.
    If you are sure that for a similar amount of underlying data the query took only 3 seconds in the past it would be very useful if you - by any chance - have an execution plan at hand of that "3 seconds" execution.
    One thing that I could imagine is that due to the monthly data growth that you've mentioned one or more of the tables have exceeded the "2% of the buffer cache" threshold and therefore are no longer treated as "small tables" in the buffer cache. This could explain that you now have more physical reads than in the past and therefore the query takes longer to execute than before.
    I think that this query could only be executed in 3 seconds if it is somewhere using a predicate that is more selective and could benefit from an indexed access path.
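    As an aside, a rough way to check whether a table has outgrown ~2% of the default buffer cache (a sketch; APP_OWNER is a hypothetical schema, and the block counts come from the optimizer statistics, so they must be reasonably current):

    -- Compare each table's block count against the buffers in the DEFAULT pool;
    -- above roughly 2% it is no longer cached as a "small table".
    SELECT t.table_name, t.blocks,
           ROUND(t.blocks / c.buffers * 100, 2) AS pct_of_cache
    FROM   dba_tables t,
           (SELECT buffers FROM v$buffer_pool WHERE name = 'DEFAULT') c
    WHERE  t.owner = 'APP_OWNER'
    AND    t.blocks IS NOT NULL
    ORDER  BY t.blocks DESC;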
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Sql query is taking more time

    Hi all,
    DB: Oracle 9i
    I am facing the below query problem.
    The problem is that the query is taking much more time (45 min) than earlier (10 sec).
    Please can anyone suggest something?
    SQL> SELECT MAX(tdar1.id) id, tdar1.request_id, tdar1.lolm_transaction_id,
                tdar1.transaction_version
         FROM   transaction_data_arc tdar1
         WHERE  tdar1.transaction_name = 'O96U '
         AND    tdar1.transaction_type = 'REQUEST'
         AND    tdar1.message_type_code = 'PCN'
         AND    NOT EXISTS (
                    SELECT NULL
                    FROM   transaction_data_arc tdar2
                    WHERE  tdar2.request_id = tdar1.request_id
                    AND    tdar2.lolm_transaction_id != tdar1.lolm_transaction_id
                    AND    tdar2.id > tdar1.id)
         GROUP BY tdar1.request_id,
                  tdar1.lolm_transaction_id,
                  tdar1.transaction_version;
    Execution Plan
    0     SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1 Bytes=42)
    1  0    SORT (GROUP BY) (Cost=12 Card=1 Bytes=42)
    2  1      FILTER
    3  2        TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC' (Cost=1 Card=1 Bytes=42)
    4  3          INDEX (RANGE SCAN) OF 'NK_TDAR_2' (NON-UNIQUE) (Cost=3 Card=1)
    5  2        TABLE ACCESS (BY INDEX ROWID) OF 'TRANSACTION_DATA_ARC' (Cost=5 Card=918 Bytes=20196)
    6  5          INDEX (RANGE SCAN) OF 'NK_TDAR_7' (NON-UNIQUE) (Cost=8 Card=4760)

    The problem is that the query is taking much more time (45 min) than earlier (10 sec).
    Then something must have changed (data growth/stale statistics/...?).
    You should post as many details as possible; how and what to post is described in the FAQ, see:
    *3. How to improve the performance of my query? / My query is running slow*.
    When your query takes too long...
    How to post a SQL statement tuning request
    SQL and PL/SQL FAQ
    Also, given your database version, using NOT IN instead of NOT EXISTS might make a difference (but they're not the same).
    See: SQL and PL/SQL FAQ
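    If stale statistics are the suspect, re-gathering them is a quick first check - a sketch (APP_OWNER is a hypothetical schema; adjust the options to your system):

    BEGIN
      -- Refresh table and (via cascade) index statistics so the optimizer
      -- sees the current data volumes.
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',
        tabname => 'TRANSACTION_DATA_ARC',
        cascade => TRUE);
    END;
    /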

  • SELECT query taking long time

    Hi All,
    I am trying to run one SELECT statement which uses 6 tables. That query generally takes 25-30 minutes to generate output.
    Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and that no other process is using them.
    What else should I check in order to figure out why my SELECT statement is taking so long?
    Any help will be much appreciated.
    Thanks!

    Please let me know if you still want me to provide all the information mentioned in the link.
    Yes, please.
    Before you can even start optimizing, it should be clear what parts of the query are running slow.
    The link contains the steps to take to identify the things that make the query run slow.
    Ideally you post a trace/tkprof report with wait events; it'll show what the time is being spent on, and it gives an execution plan and the database version all at once...
    Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and no other process is using them.
    Well, something must have changed.
    And you must identify what exactly has changed, but it's a broad range you have to check:
    - it could be outdated table statistics
    - it could be data growth or skewness that makes Optimizer choose a wrong plan all of a sudden
    - it could be a table that got modified with some bad index
    - it could be ...
    So, by posting the information in the link, you'll leave less room for guesses from us, so you'll get an explanation that makes sense faster or, while investigating by following the steps in the link, you'll get the explanation yourself.
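    For reference, a minimal way to produce the trace/tkprof report with wait events mentioned above (a sketch; the trace-file identifier is arbitrary):

    ALTER SESSION SET tracefile_identifier = 'slow_select';
    -- Event 10046 level 8 = SQL trace including wait events.
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    -- ... run the slow SELECT here ...
    ALTER SESSION SET events '10046 trace name context off';
    -- Then format the raw trace file on the server:
    --   tkprof <tracefile>.trc slow_select_report.txt sys=no waits=yes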

  • Collaborative Websites: Best Practice?

    Good morning! Afternoon? I'm starting to delve into more advanced topics in SharePoint and am aiming to make a collaborative website between various groups.
    I'm rather confused about the concepts of site collections, wikis, etc.
    What I was hoping to create is a site with basic information: news, contacts, etc. But one tab on the navigation bar would lead to a wiki, and another to a personalizable site. So I suppose I have two separate questions:
    1) Does the wiki have to stand as its own site collection under the same web application?
    2) Same question for the personalizable site. Additionally, are there any resources you would recommend? I'm having a hard time finding things in 'beginner's' terms.
    Many thanks, my friends.
    Edit: To clarify on the personalizable site: each person with editing rights on the SharePoint site can have their own page (similar to a Facebook profile) to which they can add whatever their heart desires.

    Hi Catherine,
    Firstly, you need only one site collection to get your portal provisioned. However, the sizing and the number of site collections are decided based on the users and the volume of data growth.
    I would suggest you create a team site collection, which is a collaboration template; you can then explore the possibilities of hosting blogs with community sites.
    For the wiki, you can choose the Wiki Site template, and it can be a site under the team site collection. Alternatively, you could also use the team site as a wiki site by using a wiki library.
    For the personalizable site, the best option would be My Site, which is equivalent to Facebook and offers more capabilities on the enterprise social front.
    The My Site Host is itself a site collection, so this would go into a separate site collection. My Site has dependencies on User Profiles and other services, so you may need to plan accordingly.
    here are links for reference -
    Overview of sites and site collections in SharePoint 2013
    http://technet.microsoft.com/en-us/library/cc262410(v=office.15).aspx
    Configure My Sites in SharePoint Server 2013
    http://technet.microsoft.com/en-us/library/ee624362(v=office.15).aspx
    Differences between Enterprise Wiki and Wiki Page Library in SharePoint 2013
    http://bernado-nguyen-hoan.com/2013/05/10/differences-between-enterprise-wiki-and-wiki-page-library-in-sharepoint-2013/
    Create and edit a wiki
    http://office.microsoft.com/en-us/office365-sharepoint-online-small-business-help/create-and-edit-a-wiki-HA102775321.aspx
    Plan sites and site collections in SharePoint 2013
    http://technet.microsoft.com/en-us/library/cc263267(v=office.15).aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you
