Inconsistent amounts

Hi,
While converting a parked invoice into an invoice I get the following error:
Inconsistent amounts
Message no. F5704
Diagnosis
Item "0000000001" in the FI/CO document has the debit/credit indicator "H". Amounts in the FI/CO interface are assigned a +/- sign as follows:
Debit amounts are greater than or equal to zero
Credit amounts are less than or equal to zero
Not all the amounts in item "0000000001" meet this specification.
System Response
Data cannot be processed.
Procedure
This is a system error. It may be the case that one or more amount fields in the calling application were not filled correctly (as regards the above-mentioned specification).
What do you suggest in this case?

Hi,
check out this correction released by SAP:
OSS note 989252
https://websmp130.sap-ag.de/sap(bD16aCZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=989252
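For background, the sign convention quoted in the diagnosis can be written out as a small check. The following is only an illustrative sketch, not the actual SAP coding (the productive check sits in FI interface includes such as LFACIF4T, quoted further down on this page); ACCIT and ACCCR are the standard FI/CO interface structures:
* Illustrative sketch of the F5704 sign rule (not the actual SAP coding).
DATA: ls_accit TYPE accit,   " item data of the FI/CO interface (contains SHKZG)
      ls_acccr TYPE acccr.   " currency amounts of the interface (contains WRBTR)

IF ( ls_accit-shkzg = 'S' AND ls_acccr-wrbtr < 0 ) OR   " debit amounts must be >= 0
   ( ls_accit-shkzg = 'H' AND ls_acccr-wrbtr > 0 ).     " credit amounts must be <= 0
  MESSAGE e704(f5) WITH ls_accit-posnr.                 " "Inconsistent amounts"
ENDIF.
In other words, an item flagged as credit ("H") that arrives with a positive amount, as in your message, is rejected by the interface; the note below corrects the calling application so the amounts are filled with the right sign.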
I hope this will solve your problem.
Regards,
Jams

Similar Messages

  • Cash Journal : Error "Inconsistent amounts" during a line items reversal

    Dear Experts,
    I have posted a line item in the Cash Journal.
    Now, when I try to delete/reverse it, I get the error "Inconsistent amounts", which leads to an ABAP dump in "FMGL_CHECK_PERIODS_REV_REAL". I have downloaded and applied SAP Note 1247225, but it did not fix the error.
    Please advise and help me out.

    Hi Hussein
    In addition to applying the note using SNOTE,
    you also need to perform the following manual configuration steps.
    Call transaction SM30 and change the following entries in table TRWPR:
    PROCESS  EVENT    SUBNO COMPONENT KZ_BLG FUNCTION
    DOCUMENT PREREV  190   EAFM             FMGL_CHECK_PERIODS_REV_REAL
    becomes
    DOCUMENT PREREV  190   EAFM             FMFA_CHECK_PERIODS_REV_REAL_CK
    DOCUMENT REVERSE  065  EAFM             FMGL_CHECK_PERIODS_REV_REAL
    becomes
    DOCUMENT REVERSE  065  EAFM             FMFA_CHECK_PERIODS_REV_REAL
    Regards
    Sach!n

  • Order related billing for partial payment and inconsistent amounts

    Hi,
    We are using billing plans with payment cards to capture payments before goods are delivered.  SAP doesn't seem to support credit cards for down payments, so we are entering the payment in the payment card and billing plan lines, then generating the F1 order-related billing.
    Sometimes the dollar amount is prorated proportionally, and sometimes it is not. For example, if we have 5 line items for $100, $200, $300, $400, $500 (a total net value of $1,500) and the customer makes a credit card payment of $500, we sometimes see this explode to $33, 66, 99, 122, 155 in the billing document, and sometimes to $0, $0, $0, $0, $500.
    Where does SAP control the way the amounts are derived in the billing document?  This is milestone billing.
    Ideally, we would like this evenly applied across all conditions.  I really need to understand how SAP handles this.  The configuration indicates that the pricing is copied straight from the order.  I don't see anything custom in the document copy procedure here.
    I have had a difficult time finding anything in forums or on the internet in general. I've tried search terms such as "order related billing amount distribution" and "payment card amount allocation", as well as a few other terms, but haven't found anything helpful.
    Thanks for any suggestions you may have.

    Hi,
    Have you checked the copy control from order to invoice?
    At item level, the pricing type should be G.
    Kapil

  • Inconsistent Amounts while releasing to Accounting

    Hi All,
    When I am creating an S1 document with reference to an F2, I am getting this error.
    The error noted when we try to release to accounting is as follows: "Inconsistent amounts
    Message no. F5704
    Diagnosis
    Item "0000000001" in the FI/CO document has the debit/credit indicator " S". Amounts in the FI/CO interface are assigned a +/- sign as follows:
    Debit amounts are greater than or equal to zero
    Credit amounts are less than or equal to zero
    Not all the amounts in item "0000000001" meet this specification.
    System Response
    Data cannot be processed.
    Procedure
    This is a system error. It may be the case that one or more amount fields in the calling application were not filled correctly (as regards the above-mentioned specification)."
    Could anyone help me out regarding this issue?
    Regards,
    Raghu.

    Raghu, ran into this once before and got the following advice via OSS ...
    I can propose an easy way to prevent this problem in the future: you can prevent the summarization of debit and credit tax line items from SD by implementing the following FI substitution at callup point 2 (line item level) in transaction OBBH.
    If the FI document is created from an SD billing document (prerequisite: BKPF-AWTYP = 'VBRK') and the currently processed line item is an output tax line item (prerequisite: BSEG-MWART = 'A'), the text (BSEG-SGTXT) of debit tax line items (prerequisite: BSEG-SHKZG = 'S') can be filled with one constant value (for example 'debit output tax') and the text of credit tax line items (prerequisite: BSEG-SHKZG = 'H') with another (for example 'credit output tax').
    In that way debit lines get different texts than credit lines and will no longer be summarized.
    You can decide whether this substitution should be used for all processes or only activated for those problematic invoices.
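    For illustration only, the substitution described above would look roughly like this if written as an exit (the form name here is made up; in practice the rule is maintained in OBBH, optionally with an exit in a copy of the standard pool RGGBS000):
    * Sketch of the described substitution at callup point 2 (line item level).
    FORM z_sgtxt_for_sd_tax_lines.                   " illustrative name only
      IF bkpf-awtyp = 'VBRK' AND bseg-mwart = 'A'.   " SD billing doc, output tax item
        CASE bseg-shkzg.
          WHEN 'S'.                                  " debit tax line
            bseg-sgtxt = 'debit output tax'.
          WHEN 'H'.                                  " credit tax line
            bseg-sgtxt = 'credit output tax'.
        ENDCASE.
      ENDIF.
    ENDFORM.
    With different SGTXT values, the debit and credit tax lines no longer match and are not summarized.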
    It helped us solve the issue at the time, so hopefully you will get the same positive result. Please revert if you need more.
    Thanks,
    Jay

  • Deleting Files in MacBookPro - Yosemite sending "FREE SPACE" to "Other"

    I have the weirdest problem.
    I'm running a Macbook Pro Late 2013 with OS X Yosemite 10.10.1.
    The MAC came with a 500GB SSD Drive.
    I have about 60GB of free space on my Mac. I need to restore a large file that is about 60GB.
    So I thought to myself, "I'll just delete files I don't need".
    I went ahead and deleted a parallels VM that was about 60GB in size. But I was shocked when I checked "About this Mac": no free space had been added!
    I went ahead and did all the usual stuff: emptied the trash, restarted the Mac, repaired permissions using Disk Utility, restarted the Mac again - but no matter what I did, I could not get back the 60GB of free space from the file I had just deleted.
    I downloaded OmniDisk Sweeper and Disk Doctor, but neither of these applications was able to get my 60GB back, or at least point me to where the 60GB had gone.
    So I decided to disable my Time Machine. This usually gets rid of the backup cache. "About this Mac" showed that I would gain about 50GB of free space by doing this. After doing this, I was shocked to see that my free space had only changed negligibly - from 60GB to 64GB. Deleting 50GB of backups only added 4GB to my free space.
    However, during all of this, I noticed my "Other" data in "About this Mac" had grown considerably, from about 90GB to about 206GB. It seems to me that all the space I have freed up has somehow gone into the "Other" category instead of the "Free Space" category.
    Please note my trash can is completely empty, and I have repaired permissions using Disk Utility at least twice with no effect.
    I need to get my free space back so I can restore my virtual machine and get back to work.
    Please help!!!

    How much space does it say is available when you click on the HDD icon on your desktop and press Command+I?
    How does this compare with OmniDiskSweeper?
    You might have to reindex Spotlight if you are seeing inconsistent amounts.
    Ciao.

  • System error: Inconsistency between Credit/debit indicator and amounts

    Hi All,
    I am getting this error while trying to release a billing document to accounting. The message number is F5693.
    Please help!
    Thanks,
    Anupriya

    Dear Kaushik,
    The system triggers this error from the following includes:
    LFACIF4T
    LFACIF4Z
    * Excerpt of the check (the error is triggered when this condition is true):
    IF   ACCCR_FI-WRBTR LT 0
      OR ACCCR_FI-SKFBT LT 0
      OR ACCCR_FI-BUALT LT 0
      OR ACCCR_FI-WSKTO LT 0
      OR ACCCR_FI-KZBTR LT 0
      OR ACCCR_FI-QSFBT LT 0
      OR ACCCR_FI-QSSHB LT 0
      OR ACCCR_FI-WMWST LT 0
      OR ACCCR_FI-GBETR LT 0.
    Check whether any of the above field values is less than 0.
    Thanks & Regards,
    Hegal K Charles

  • Copying large amount of data from one table to another getting slower

    I have a process that copies data from one table (big_tbl) into a very big archive table (vb_archive_tbl - 30 million records - partitioned table). If there are fewer than 1 million records in big_tbl, the copy to vb_archive_tbl is fast (under 10 minutes), but more importantly, it is consistent. However, if there are more than 1 million records in big_tbl, copying the data into vb_archive_tbl is very slow (30 minutes to 4 hours) and very inconsistent. Every few days the time it takes to copy the same amount of data grows significantly.
    Here's an example of the code I'm using, which uses BULK COLLECT and FORALL INSERT to copy the data.
    I occasionally change 'LIMIT 5000' to see performance differences.
    DECLARE
      -- Record/collection types for the rows copied from big_tbl
      TYPE t_rec_type IS RECORD (fact_id    NUMBER(12,0),
                                 store_id   VARCHAR2(10),
                                 product_id VARCHAR2(20));
      TYPE CFF_TYPE IS TABLE OF t_rec_type
        INDEX BY BINARY_INTEGER;
      T_CFF CFF_TYPE;
      CURSOR c_cff IS
        SELECT * FROM big_tbl;
    BEGIN
      OPEN c_cff;
      LOOP
        FETCH c_cff BULK COLLECT INTO T_CFF LIMIT 5000;
        EXIT WHEN T_CFF.COUNT = 0;   -- guard: skip FORALL when the last fetch returns no rows
        FORALL i IN T_CFF.first .. T_CFF.last
          INSERT INTO vb_archive_tbl
          VALUES T_CFF(i);
        COMMIT;                      -- commit per 5000-row batch
        EXIT WHEN c_cff%NOTFOUND;
      END LOOP;
      CLOSE c_cff;
    END;
    Thank you very much for any advice.
    Edited by: reid on Sep 11, 2008 5:23 PM

    Assuming that there is nothing else in the code that forces you to use PL/SQL for processing, I'll second Tubby's comment that this would be better done in SQL. Depending on the logic and partitioning approach for the archive table, you may be better off doing a direct-path load into a staging table and then doing a partition exchange to load the staging table into the partitioned table. Ideally, you could just move big_tbl into the vb_archive_tbl with a single partition exchange operation.
    That said, if there is a need for PL/SQL, have you traced the session to see what is causing the slowness? Is the query plan different? If the number of rows in the table is really a trigger, I would tend to suspect that the number of rows is causing the optimizer to choose a different plan (with your sample code, the plan is obvious, but perhaps you omitted some where clauses to simplify things down) which may be rather poor.
    Justin

  • Reconciled amounts appears in vendor aging report

    Dear all,
    When we run the report for reconciled transactions, reconciled amounts appear in the aging buckets (suppose we have an AP invoice in 0-30 days and the payment made in 30-60 days, and they are already reconciled).
    Why are they showing in the reconciled aging report?
    Jeyakanthan

    Dear,
    I would recommend that you refer to the following note: 1228363 - Business Partner Balance does not match journal entries.
    In any SAP Business One version lower than 2007, 2 separate
    reconciliation engines were employed. One reconciliation engine worked
    on the marketing document level, where partial reconciliation was
    supported & the other worked on the journal entry level, where only
    complete reconciliation was supported.
    This duality led in some instances to reconciliation inconsistencies
    which were initially highlighted with SAP Business One version 2004
    patch level 29, where the red message 'Business Partner balance does not
    match journal entries' was displayed when running the ageing report by
    sales documents & inconsistencies were detected.
    In the year end closing guides (available for download from the
    Documentation Resource Centre) SAP have always recommended to run the
    ageing reports by journal posting.
    In 2006, SAP published note 752261, now retired, documenting the system
    behaviour & offering a series of 'Select' queries to aid in
    reconciliation inconsistency analysis. These queries were an excellent
    tool but now SAP can offer a faster, better & more accurate solution.
    In SAP Business One 2007 A & B the reconciliation engines have been
    unified & partial reconciliation on journal entry level is now fully
    supported. Any database that is upgraded from a lower version undergoes
    a complex series of reconciliation upgrade algorithms, where any
    reconciliation inconsistency is identified & a reconciliation upgrade
    (RU) journal entry is automatically created. This RU journal does not
    document any accounting transaction, but simply creates an update in the
    'Balance Due' column, thus notionally re-opening previously
    inconsistently reconciled transactions.
    These RU journals are then available for internal reconciliation.
    Please consult the 2007 Internal Reconciliation Upgrade(IRU)Landing
    Page:
    Channel Partner Portal -> Solutions -> SAP Business One -> Hot Topics ->
    SAP Business One 2007 Information Center -> New Single Reconciliation
    Engine- A single engine reconciles the difference and eliminates the
    previous reconciliation issue. More
    Click on 'More' to be directed to the IRU Landing Page.
    Here you will find 'How-to-Guides' in all supported languages, links to
    Expert Training Sessions & links to other pertinent information.
    You will also find the 2007 A & B upgrade simulation tools available for
    download. SAP recommends subjecting any database that is planned to be
    upgraded to this simulation.
    The upgrade simulation tool employs all internal reconciliation upgrade
    algorithms, and the application will show in the 'Internal
    Reconciliation Upgrade Audit Trail Report' whether a database is affected
    by inconsistencies.
    Hence, if the message 'Business Partner Balance does not match Journal
    Entries' is encountered, proceed as follows:
    Option 1:
    Upgrade database to SAP Business One 2007 & internally reconcile any RU
    journals.
    Option 2:
    Go to the IRU landing page & download the 'IRU pre-upgrade tool', either
    for version 2007A or 2007B depending on your localisation. Follow the
    installation instructions & after successful installation & a simulated
    upgrade of the database, analyse any inconsistencies that appear with the
    help of the How-to guide. Should a type of inconsistency not have a
    drop-down arrow & be described as 'Balancing upgrade journal transaction',
    the company accountant may decide to write it off if the amount is
    insignificant; if the amount is not insignificant, please log a
    support ticket with the component:
    SBO-ADM-UT-IAT
    Step by step detailed instructions including screenshots of how to
    install the sim-tool & what it does can be found on this wiki page:
    https://wiki.sdn.sap.com/wiki/display/B1/BPaccountBalancedoesnot+matc
    chJournalEntries
    Thanks & Regards
    Apple

  • Inconsistency in GL account

    Good day sir/madam,
    I have a problem of GL account inconsistency. After a merger two years ago, the company no longer uses company code XXX. Throughout last year (2006) we did not post any amount to GL account 900000 (profit and loss) in any period, which left all the entries in transaction FS10N blank. Somehow, after closing the year 2006 and opening the year 2007, I found an amount of -768.200 IDR in the balance carry-forward line and in all the other periods of 2007. I have no idea where the amount came from. Could you give me some information on how to investigate this problem, so that we can find the activity that caused it?
    Thank you for your support.
    Regards.

    Hi,
    Try rerunning F.16 in test run/normal run and see if it shows again.
    Rgds.

  • Unable to Change Withholding Tax Base Amount while creating Service AP Invoice through DI API?

    Dear All,
    I am trying to create Service AP Invoice through DI API.
    If I post the document without changing SAPPurchaseInvoice.WithholdingTaxData.TaxableAmount, the document is created in SAP without any problem.
    But if I change the amount in the above field, then the DI API throws the error "Unbalanced Transaction".
    If I post the same document in SAP with the changed base amount, it is posted without any issue.
    Where am I going wrong?
    please guide.
    Using:
    SAP B1 version 9 Patch Level 11
    Location : India.
    Thanks.

    Hi,
    Maybe you can find a solution in these notes: 1812344 and
    1846344 - Overview Note for SAP Business One 8.82 PL12
    Symptom
    This SAP Note contains collective information related to upgrades to SAP Business One 8.82 Patch Level 12 (B1 8.82 PL12) from previous SAP Business One releases.
    In order to receive information about delivered patches via email or RSS, please use the upper right subscription options on http://service.sap.com/~sapidp/011000358700001458732008E
    Solution
    Patch installation options:
    SAP Business One 8.82 PL12 can be installed directly on previous patches of SAP Business One 8.82
    You can upgrade your SAP Business One to 8.82PL12 from all patches of the following versions:8.81; 8.8; 2007 A SP01; 2007 A SP00; 2007 B SP00; 2005 A SP01; 2005 B
    Patch content:
    SAP Business One 8.82 PL12 includes all corrections from previous patches for releases 8.82, 8.81, 8.8, 2007, and 2005.
    For details about the contained corrections, please see the SAP Notes listed in the References section.
    Notes: SAP Business One 8.82 PL12 contains B1if version 1.17.5
    Patch download:
    Open http://service.sap.com/sbo-swcenter -> SAP Business One Products -> Updates -> SAP Business One 8.8 -> SAP BUSINESS ONE 8.82 -> Comprised Software Component Versions -> SAP BUSINESS ONE 8.82 -> Win32 -> Downloads tab
    Header Data
    Released On: 02.05.2013 02:34:18
    Release Status: Released for Customer
    Component: SBO-BC-UPG Upgrade
    Priority: Recommendations/additional info
    Category: Upgrade information
    References
    This document refers to:
      SAP Business One Notes
    1482452
    IN_Wrong tax amount was created for some items in the invoice with Excisable BOM item involves
    1650289
    Printing Inventory Posting List for huge amount of data
    1678528
    Withholding amount in the first row is zeroed.
    1754529
    Error Message When Running Pick and Pack Manager
    1756263
    Open Items List shuts down on out of memory
    1757641
    Year-end closing
    1757690
    SEPA File Formats - New Pain Versions
    1757898
    Incoming Bank File Format
    1757904
    Outgoing Bank File Format
    1762860
    Incorrect weight calculation when Automatic Availability Check is on
    1770690
    Pro Forma Invoice
    1776948
    Calendar columns are wrong when working with Group View
    1780460
    OINM column description is not translated
    1780486
    UI_System crash when you set extreme value of double type to DataTable column
    1788256
    Incorrect User-Defined Field displayed in a Stock Transfer Request
    1788372
    ZH: 'Unacceptable Field' when export document to word
    1788818
    RU loc: No freight in the Tax Invoice layout
    1790404
    Cash Flow Inconsistency when Canceling Payment
    1791295
    B1info property of UI API AddonsInstaller object returns NULL value
    1791416
    Adding a new item to BoM is slow
    1794111
    Text is overlapping in specific localization
    1795595
    Change log for item group shows current system date in all the "Created" fields
    1797292
    Queries in alerts should support more query results
    1800055
    B1if_ Line break issue in inbound retrieval using JDBC
    1802580
    Add Journal Voucher to General Ledger report
    1803586
    Not realized payment is exported via Payment Engine using 'SAPBPDEOPBT_DTAUS' file format
    1803751
    Period indicator of document series can be changed although it has been used
    1804340
    LOC_BR_Cannot update Nota Fiscal Model
    1805554
    G/L Account displayed in a wrong position when unticking the checkbox "Account with Balance of Zero"
    1806576
    Payment Cannot Be Reconciled Internally
    1807611
    Cannot update UDF in Distribution Rule used in transactions
    1807654
    Serial No./Batch inconsistency by canceled Inventory Transfer
    1808694
    BR: Business Partner Code cannot be updated with CNPJ CPF error
    1809398
    CR_Cannot Display Related Multi-Value Parameters
    1809758
    Arrow key not work for Batch/Serial Number Transactions Report
    1810099
    Tax Amount is Recalculated Even if Tax Code Is Not Changed
    1811270
    Upgrade fails on Serial And Batches object with error code -10
    1811846
    Cannot run Exchange Rate Differences when multi branch is activated
    1812344
    Withholding Tax Amount Is Not Updated in Payment Once Witholding Tax Code Is Changed in Document through DI API
    1812740
    DI:"Operation Code" show wrong value when add "A/P Tax Invoice" based on "A/P Invoice"
    1813029
    US_Vendor address on 1099 Summary by Form/Box Report is not updated according to the latest Invoice
    1813835
    Wrong amounts of Goods Return in Open Item List
    1814207
    Preliminary page prints setting does not keep after upgrade
    1814860
    Value "Zero" cannot be imported to "Minimum Inventory Level" field via Excel file
    1815535
    RFQ: Web front end not displayed in supplier language
    1815810
    GT: Adding Incoming Payment for Some Cash Flow Relevant Accounts Fails
    1816191
    BR:System Crashes While Working with Tax Code Determination Window
    1816611
    CR_Crystal Report Displayed Incorrectly Afte

  • Items have not been activated due to inconsistent withholding tax info

    Hi experts,
    I want to make a vendor payment through F-53. After filling in the details, when I click on "Process open items" and double-click on the amount, it gives the error "1 items have not been activated due to inconsistent withholding tax info".
    regards
    gk

    Dear Kumar
    The relevant withholding tax types change, for example, as a result of legal changes. A withholding tax type is relevant if the following conditions apply (a rough sketch of how to check them follows below):
    1. The withholding tax type is entered in the SAP master record of the business partner and indicated as subject to withholding tax (vendor).
    2. The company code is indicated as authorized to deduct tax for vendors, or as subject to withholding tax for customers, for this withholding tax type (withholding tax data for company code: view V_T001WT).
    3. The posting date is in the validity period for the deduction authorization (withholding tax data for company code: view V_T001WT).
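    As a rough, illustrative way to verify these three settings (only a sketch: tables LFBW and T001WT underlie the vendor master withholding tax data and view V_T001WT, but the exact indicator and validity-date field names should be verified in SE11 before relying on this):
    REPORT z_wht_relevance_check_sketch.
    
    PARAMETERS: p_lifnr TYPE lifnr,   " vendor
                p_bukrs TYPE bukrs,   " company code
                p_witht TYPE witht.   " withholding tax type
    
    DATA: ls_lfbw   TYPE lfbw,        " vendor master withholding tax types
          ls_t001wt TYPE t001wt.      " company code withholding tax data (view V_T001WT)
    
    * 1) Withholding tax type entered in the vendor master?
    SELECT SINGLE * FROM lfbw INTO ls_lfbw
      WHERE lifnr = p_lifnr AND bukrs = p_bukrs AND witht = p_witht.
    IF sy-subrc <> 0.
      WRITE: / 'Condition 1 fails: type not in vendor master'.
    ENDIF.
    
    * 2) and 3) Company code authorised to deduct for this type, and posting date
    *    inside the validity period (check the authorisation indicator and the
    *    from/to dates in ls_t001wt - verify the exact field names in SE11).
    SELECT SINGLE * FROM t001wt INTO ls_t001wt
      WHERE bukrs = p_bukrs AND witht = p_witht.
    IF sy-subrc <> 0.
      WRITE: / 'Conditions 2/3 fail: type not configured for company code'.
    ENDIF.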
    Regards,
    Chintan Joshi.

  • Database Inconsistency Nightmare

    As an avid user and supporter of Aperture since its release, I now have a new issue. I am a pro photojournalist using Aperture every day for all my magazine assignment work, and I feel like I know the program very well after 2+ years of heavy daily use.
    I have never had any serious problems with the program at all, until one week ago.
    The problems started when I launched Aperture and got a database inconsistency dialog box. I was not too worried, as I have had this once or twice in the past, and I just relaunched the app with the Apple and Option keys down and rebuilt the database. Everything seemed OK, but then I noticed that 2,400 of my 12,000 referenced images were tagged as missing. I thought this was strange, and I used "Manage Referenced Files" to see what was going on and reconnect the missing files. I also noticed that 2 subjects had a second version not in a stack that contained the thumbnail for each image without metadata, while the normal version and stack with metadata had no thumbnail! On my FireWire drive 1, which holds a first copy of all referenced files in my library, each subject has a corresponding folder, and I started to reconnect the offline and missing files. For most of the subjects everything went fine, but then there were 4 subjects where I encountered very unexpected problems. Firstly, the names of the Master images for some subjects had been changed more than one year earlier, but the reconnect dialog box showed the original filename from the camera as the master filename. It did not want to "Reconnect All", so the only way was one by one!
    Then I notice that the smart folders set to show "Offline" and "Missing" masters are inaccurate for some reason.
    I even have a few "Managed" masters that will not export because the dialog box comes up to tell me that the image is "Offline" or "Unavailable". What? Ok so at this point I decide that since I have everything backed up on two separate Firewire drives and a server in California, that it may just be cleaner to reinstall Aperture and revert to a recent Vault. I did this and guess what, same problems plus a few more headaches reconnecting masters!
    The biggest problem at this point is that for two subjects, the captions and adjustments are in a version that has no thumbnail, shows the "Missing" tag in the Metadata panel listing the original filename from the camera as the Master Filename, while there is a new mysterious version 2 which has a thumbnail, no caption, the offline tag and and shows the correct Master Filename. If I reconnect the masters for these images it still shows as missing for the normal version which has the captions, keywords and adjustments.
    At this point I have decided that for two of the subjects One is Versailles 600 images and the other Cyprus 700 images, that I will probably end up having to just reimport these two subjects and manually move the caption for each image.
    This is all starting to make me call into question the stability of this type of system for archiving large amounts of images. True, with all the backups and the normal robustness of Aperture you should not lose images. But if the Masters get unlinked in a corrupt way from the database, you CAN lose a large part of your post-processing work. And the worst part seems to be that this can happen even if you DO know what you are doing.
    This does not at all mean I am going to stop using Aperture, but it does make me feel a little more wary, from an archival standpoint, about keeping all metadata and adjustments separate from master images.

    Thanks for all your replies. And yes the Project container is a fantastic way to back things up. It is especially useful for moving a project from one Library to another without risk of overwriting files.
    Naturally if I could get my corrupt projects in order I would export the projects, including the masters and then reimport them. But Aperture is not going to let me do that with these corrupt Projects, as the masters are not be properly linked. I have tried everything I could think of.
    There are still some really weird things going on with the database in my library. One really strange thing is that some referenced images that DO have the master linked and show the normal reference tag, but will not Consolidate as the dialog comes up "The selection does not contain any referenced files".
    But then for the same image you can choose "Show in Finder" and it goes straight to the master on its external drive! How strange is that. For that matter, it will also not "Relocate Master" for the image, because it says that the master is "Offline or cannot be found". OK, then here comes the weirdest part: it will export the image without a hitch! Manage Referenced Files shows these images as connected and black.
    Then the other really bad thing is that some of the managed images will not export or relocate as the dialog comes up "Offline or cannot be found". I have rebuilt the library and everything else to no avail.
    The thing that bothers me about this is that if this can happen to me, who spends untold hours on Aperture and on the computer in general and I believe I do know quite a lot about the program, what is going to happen to the many photographers who do not fully grasp the abstract file management of Aperture?
    Maybe it is more stable to run your Library as Managed, but for working photographers, who are usually laptop users, it is almost essential to keep the masters on an external Rugged drive. If you put the Library on the external drive, the performance, which is already sometimes sluggish when you need to work very quickly (even with the most powerful MacBook Pro), degrades to constant beach-balling.
    In any event I think that if I can ever get my Library back in order, in addition to the many backups of masters and vaults, I am going to export each project including masters and back that up too on a regular basis. Seems like a lot of work backing up to maintain the integrity of a database!
    Cheers

  • Inconsistent datatypes:expected - got - error in handling xml

    Hi, I am getting the error "Error(45,12): PL/SQL: ORA-00932: inconsistent datatypes: expected - got -" in this procedure.
    I have tried a lot and am stuck.
    create or replace PROCEDURE BT_CPE_XML_READ1 IS
      dest_clob  CLOB;
      src_clob   BFILE  := BFILENAME('DOC_PATH', 'tester.xml');
      dst_offset NUMBER := 1;
      src_offset NUMBER := 1;
      lang_ctx   NUMBER := DBMS_LOB.DEFAULT_LANG_CTX;
      warning    NUMBER;
      ex         NUMBER;
      v_cast     XMLTYPE;
      v_varchar  VARCHAR2(32767);
    BEGIN
      DBMS_LOB.CREATETEMPORARY(dest_clob, TRUE);
      ex := DBMS_LOB.FILEEXISTS(src_clob);
      IF ex = 1 THEN
        INSERT INTO test_clob (id, file_name, xml_file_column, timestamp)
        VALUES (1001, 'test.xml', empty_clob(), SYSDATE)
        RETURNING xml_file_column INTO dest_clob;
        DBMS_LOB.OPEN(src_clob, DBMS_LOB.LOB_READONLY);
        DBMS_LOB.LOADCLOBFROMFILE(
            DEST_LOB     => dest_clob
          , SRC_BFILE    => src_clob
          , AMOUNT       => DBMS_LOB.GETLENGTH(src_clob)
          , DEST_OFFSET  => dst_offset
          , SRC_OFFSET   => src_offset
          , BFILE_CSID   => DBMS_LOB.DEFAULT_CSID
          , LANG_CONTEXT => lang_ctx
          , WARNING      => warning );
        DBMS_OUTPUT.ENABLE(100000);
        DBMS_LOB.CLOSE(src_clob);
        COMMIT;
        DBMS_OUTPUT.PUT_LINE('Loaded XML File using DBMS_LOB.LoadCLOBFromFile: (ID=1001).');
        v_cast := XMLTYPE(dest_clob);
        --dbms_output.put_line(v_cast);
        SELECT extractvalue(xml_file_column, '/modifyProductPortfolioRequest/ns1:stateCode')
          FROM test_clob;
      END IF;
    END BT_CPE_XML_READ1;
    Is there any other way to get the value (XML) from the CLOB column of a table?

    I see two issues from a quick eyeball of the code.
    #1) You insert an empty CLOB into your table, load a local variable with the XML from disk, and then query the table. You never insert the XML into the table, so you are querying an empty column in the table.
    #2) extractValue allows a third parameter, which is the namespace string. Since your XPath contains ns1: you'll need to use the third parameter.

  • Data Protection Manager 2012 - Inconsistent when backing up Deduplicated File Server

    Protected Server
    Server 2012 File Server with Deduplication running on Data drive
    DPM Server
    Server 2012
    Data Protection Manager 2012 Service Pack 1
    We just recently upgraded our DPM server from DPM 2010 to DPM 2012, primarily because it is supposed to support Data Deduplication. Our primary file server, which holds our home directories etc., is limited on space and was quickly running low, so just after we got DPM 2012 in place we optimized the drive on the file server, which reduced the data by about 50%. Unfortunately, shortly after enabling deduplication, the protected shares on the deduplicated volume started getting a "Replica is Inconsistent" error.
    I continually get "Replica is Inconsistent" for the server that has deduplication running on it. All of the other protected servers are being protected as they should be. I have run a consistency check multiple times (probably about 10) and it keeps going back to "Replica is inconsistent". The replica volume shows that it is using 3.5 TB; the actual protected volume is 4 TB in size and has about 2.5 TB of data on it with deduplication enabled.
    This is the details of the error
    Affected area:   G:\
    Occurred since: 1/12/2015 4:55:14 PM
    Description:        The replica of Volume G:\ on E****.net is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with
    consistency check. You can recover data from existing recovery points, but new recovery points cannot be created until the replica is consistent.
    For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106)
    More information
    Recommended action: 
    Synchronize with consistency check.
    Run a synchronization job with consistency check...
    Resolution:        
    To dismiss the alert, click below
    Inactivate
    Steps taken to resolve: I've spent some time searching and haven't found any solutions to what I am seeing. I have the Data Deduplication role installed on the DPM server, which has been the solution for many people seeing similar issues. I have also removed that role and then added it back. I have also removed the protected server and added it back to the protection group. It synchronizes and says consistent, then after a few hours it goes back to inconsistent. When I go to recovery it shows that I have recovery points, and it appears that I can restore, but because the data is inconsistent I don't feel I can trust the data in the recovery points. Both the protected server's and the DPM server's updates are managed via a WSUS server on our network.
    You may suggest I just un-optimize the drive on the protected server; however, after optimizing the drive it takes a lot more space to un-optimize it (anyone know why that is?), and in any case the drive isn't large enough to support un-optimization.
    If anyone has any suggestions I would appreciate any help. Thanks in advance.

    OK, I ran a consistency check and it completed successfully with the following message. However, after a few minutes of showing OK, it now shows "Replica is Inconsistent" again.
    Type: Consistency check
    Status: Completed
    Description: The job completed successfully with the following warning:
     An unexpected error occurred while the job was running. (ID 104 Details: Cannot create a file when that file already exists (0x800700B7))
     More information
    End time: 2/3/2015 11:19:38 AM
    Start time: 2/3/2015 10:34:35 AM
    Time elapsed: 00:45:02
    Data transferred: 220.74 MB
    Cluster node -
    Source details: G:\
    Protection group members: 35
     Details
    Protection group: E*
    Items scanned: 2017709
    Items fixed: 653
    There was a log for a failed synchronization job from yesterday here are the details of that.
    Type: Synchronization
    Status: Failed
    Description: The replica of Volume G:\ on E*.net is not consistent with the protected data source. (ID 91)
     More information
    End time: 2/2/2015 10:04:01 PM
    Start time: 2/2/2015 10:04:01 PM
    Time elapsed: 00:00:00
    Data transferred: 0 MB
    Cluster node -
    Source details: G:\
    Protection group members: 35
     Details
    Protection group: E*
