Check logic

My requirement is as follows.
When a warranty bill comes to us, it has the following three fields, along with others:
1. SerialNo, OldIMEI, NewIMEI. The warranty period is calculated according to the field SerialNo.
2. In the case of mobile phones, if the PCB is replaced, the IMEI number and the serial number change. So if the replaced PCB is damaged again and has to be replaced for the same mobile, the warranty period is calculated from the first replaced PCB, so the same mobile gets a new warranty period. And this goes on.
3. So, in the bill we ask the clients to enter the field "NewIMEI" if the PCB is replaced.
4. If the field "NewIMEI" is present in the bill, I update the data for the customer in a "Z" table.
The fields for this are: customer, company, serialno, oldimeino, newimeino, billno, model.
5. Now refer to this example:
  oldimei   newimei
1. aaa        bbb
2. bbb       ccc
3. ccc
Whenever a bill comes, I need to check whether the "oldimei" of record 3 (ccc) on the bill exists in the "Z" table as the "newimei" of record 2 (ccc). If yes, check whether the "oldimei" of record 2 (bbb) exists as a "newimei" in the "Z" table, as in record 1 (bbb). Keep checking until the value is no longer available as a "newimei" in any record of the table. At that point I need to capture that record and read its "oldimei", from which I get the original serial number to calculate the warranty period. How do I achieve this in a loop?
6. Also, this serial number is used in the complete warranty-check logic further on in the existing code, so I need to use this captured old serial number for all the calculations, which are not just in one place, and the same gets saved in the master table where the bill data is saved. But while saving, I want to save the existing new serial number from the bill, not the captured one. How do I go about this?
Please do guide.

Hi,
If the old and new IMEI numbers share the same identity, you can try this.
Assume the records in your internal table (it) are like this:
customer company serino oldimeino newimeino billno model
2        1000    1      aaa       bbb       1      i1
2        1000    2      bbb       ccc       2      i1
2        1000    3      ccc                 3      i1
Declare two internal tables (it1, it2) with the same structure as it.

REFRESH: it1, it2.
it1[] = it[].
SORT it BY customer oldimeino.
DELETE ADJACENT DUPLICATES FROM it COMPARING customer oldimeino.
SORT it1 BY customer newimeino.
DELETE ADJACENT DUPLICATES FROM it1 COMPARING customer newimeino.
LOOP AT it.
  READ TABLE it1 WITH KEY customer  = it-customer
                          newimeino = it-oldimeino.
  IF sy-subrc NE 0.
    MOVE-CORRESPONDING it TO it2.
    APPEND it2.
  ENDIF.
  CLEAR: it, it1, it2.
ENDLOOP.

Now it2 holds the records whose oldimeino does not appear as any newimeino, i.e. the first record of each replacement chain, with the original serial number.
Based on this internal table it2 you can update the data in the master table.
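If you prefer to walk the chain record by record, as described in the question, the lookup can also be sketched language-neutrally. Here is a minimal Python sketch (the table contents and field names are taken from the example above; the function name find_root is made up for illustration):

```python
# Sample "Z" table rows from the example above; each row is one PCB replacement.
z_table = [
    {"oldimei": "aaa", "newimei": "bbb", "serialno": "1"},
    {"oldimei": "bbb", "newimei": "ccc", "serialno": "2"},
    {"oldimei": "ccc", "newimei": "",    "serialno": "3"},
]

def find_root(bill_oldimei, table):
    """Follow oldimei -> newimei links backwards until no predecessor exists,
    then return the root record (the one carrying the original serial no)."""
    current = bill_oldimei
    while True:
        # Is the current IMEI some earlier record's newimei? If so, step back.
        prev = next((r for r in table if r["newimei"] == current), None)
        if prev is None:
            # No record has this IMEI as newimei: current is the original IMEI.
            break
        current = prev["oldimei"]
    # Return the record whose oldimei is the root, to read its serial number.
    return next(r for r in table if r["oldimei"] == current)

root = find_root("ccc", z_table)
print(root["serialno"], root["oldimei"])  # prints "1 aaa"
```

In a real implementation you would also want a guard against cyclic data (e.g. stop after a maximum number of hops), since a bad entry where an IMEI reappears later in the chain would loop forever.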

Similar Messages

  • Backup validate check logical database

What exactly does the following RMAN command do? I want to know whether it does a full backup or not:
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
Kindly help me.

RMAN does not physically back up the database with this command, but it reads all blocks and checks them for corruption.
If it finds corrupted blocks, it places the information about the corruption into a view:
    v$database_block_corruption;
    Now we can tell RMAN to recover all the blocks which it has found as being corrupt:
    RMAN> blockrecover corruption list; # (all blocks from v$database_block_corruption)
    Use below link for reference.
    http://luhartma.blogspot.com/2006/04/how-to-check-for-and-repair-block.html
    -Bharath

  • BACKUP VALIDATE vs VALIDATE in checking logical/physical corruption

    Hello all,
    I am checking if our 10gR2 database has any physical or logical corruption. I have read in some places where they state that VALIDATE command is enough to check database for physical corruption. Our database was never backed up by RMAN specifically before. Are any configuration settings needed for running BACKUP VALIDATE command? The reason I am asking is because just the VALIDATE command returns an error and BACKUP VALIDATE command runs without error but it is not showing the
    "File Status Marked Corrupt Empty Blocks Blocks Examined High SCN" lines.
    I used the command in two different formats and both do not show individual data file statuses:
    RMAN> run {
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    CONFIGURE DEVICE TYPE DISK PARALLELISM 10 BACKUP TYPE TO BACKUPSET;
BACKUP VALIDATE CHECK LOGICAL DATABASE FILESPERSET=10;
}
    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE
    RMAN> VALIDATE DATABASE;
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-00558: error encountered while parsing input commands
    RMAN-01009: syntax error: found "database": expecting one of: "backupset"
    RMAN-01007: at line 1 column 10 file: standard input
    However on a different database already being backed up by RMAN daily, BACKUP VALIDATE output shows list of datafiles and STATUS = OK as below:
    List of Datafiles
    =================
    File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
    How can we check every individual datafile status. Appreciate your responses. Thanks.

    Hi,
After you have run:
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
you can use SQL*Plus and run:
SQL> select * from v$database_block_corruption;
The output will tell you which block in which datafile is corrupt.
    Regards,
    Tycho

  • Backup validate check logical

    hi,
An MSSQL DBA asked me if Oracle could do logical/physical corruption checking when performing backups. I'm aware of BACKUP VALIDATE CHECK LOGICAL.
However, it doesn't create any backup; it just checks for corruption. If I remember correctly, you can run BACKUP CHECK LOGICAL, which will actually create the backup and check for logical corruption at the same time, but my question is...
    Why would you not want one or both of these as standard? I get it that there's overhead, more so for VALIDATE, but would you really want backups that contain corruption? Even if you keep 30 days worth of backups, you could realise after 40 days that you had corruption, and not be able to restore.
    Or is the reasoning behind not making it default, that you've got 'enough' backups that even if a datafile gets logical/physical corruption, you're bound to discover it before your backups become obsolete, and that it's then not worth the overhead of making it default?

    You can have the settings with SET MAXCORRUPT FOR DATAFILE.
    By default a checksum is calculated for every block read from a datafile and stored in the backup or image copy. If you use the NOCHECKSUM option, then checksums are not calculated. If the block already contains a checksum, however, then the checksum is validated and stored in the backup. If the validation fails, then the block is marked corrupt in the backup.
    The SET MAXCORRUPT FOR DATAFILE command sets how many corrupt blocks in a datafile that BACKUP will tolerate. If a datafile has more corrupt blocks than specified by the MAXCORRUPT parameter, the command terminates. If you specify the CHECK LOGICAL option, RMAN checks for logical and physical corruption.
    By default, the BACKUP command terminates when it cannot access a datafile. You can specify parameters to prevent termination, as listed in the following table.
    If you specify the option ... Then RMAN skips...
    SKIP INACCESSIBLE Inaccessible datafiles. A datafile is only considered inaccessible if it cannot be read. Some offline datafiles can still be read because they exist on disk. Others have been deleted or moved and so cannot be read, making them inaccessible.
    SKIP OFFLINE Offline datafiles.
    SKIP READONLY Datafiles in read-only tablespaces.

  • Backup blocks all check logical validate database

    Hello,
    From Metalink : How To Use RMAN To Check For Logical & Physical Database Corruption [ID 283053.1]
    The following example shows how to validate all datafiles:
    run {
    allocate channel d1 type disk;
    backup blocks all check logical validate database;
release channel d1;
}
    Does this following command just check for corruption , or is it actually trying to backup the datafiles to the disks ?
    Thanks

    Hello,
The BACKUP VALIDATE statement is used for checking block corruption, not for running a backup.
    You'll have more detail on the following links:
    http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmvalid.htm#CHDECGBJ
    http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/rcmvalid.htm#CHDCEHFD
Hope this helps.
    Best regards,
    Jean-Valentin

  • Parameter ' check logical ' question

    Hello,
    in the rman backup and recovery reference 10gR2, it says for the check logical parameter for backup:
    If the initialization parameter DB_BLOCK_CHECKSUM=TRUE, and if MAXCORRUPT and NOCHECKSUM are not set, then specifying CHECK LOGICAL detects all types of corruption that are possible to detect.
    But there is no description for the alternative of DB_BLOCK_CHECKSUM=FALSE.
    Now Ct. wants to know: What does check logical check if this parameter is not set to true?
    Ct. wants to check all kinds of corruption which is checkable by rman.
    regards
    Ulli

Without the CHECK LOGICAL clause, RMAN performs two types of validation in the cache layer of the data block: block checksum and head/tail verification. When you add the CHECK LOGICAL clause, additional checks are performed in the transaction and data layers of the block, which adds to the processing time.
    However, a delay of 3.2 times seems a bit of higher side. You might want to check for the CPU/IO utilization on the machine. If you have idle CPU, you may consider adding more channels. Look out for v$backup_async_io.LONG_WAITS, if the number is high, IO is slow.
    It is also possible that due to the additional checks, RMAN is not able to fill the output buffer as fast as earlier. In that case, consider increasing multiplexing (FILESPERSET / MAXOPENFILE)

  • How to check logical systems

    Hi Experts,
    What is the best way to check what are the logical systems connected to ECC and what are the logical systems connected to APO?
    Regards
    Manotosh

    Hi
    You can check the connections with TCode: RSA1 (datawarehouse workbench). Go to section Source system & see the systems connected to APO.
As advised by Vinod, you can check CIF connections between the APO & ECC systems in /SAPAPO/CC.
    In R/3 you can check the connected system by Tcode: SM59 & you can see the connection to APO in /SAPAPO/CC.
    Hope this helps.
    Regards,
    Nawanit

  • Authority-check logic

    Hi ppl,
    I have a sales org field on the selection screen for which I have to check the authorization.
    The field name is s_vkorg (as a select-option).
    I am using the below logic to check for the sales org authorization:
      AUTHORITY-CHECK OBJECT 'V_VBAK_VKO'
               ID 'VKORG' FIELD s_vkorg
               ID 'VTWEG' DUMMY
               ID 'SPART' DUMMY
               ID 'ACTVT' FIELD '03'.
My query is: will this logic check all the sales orgs entered by the user, or should I loop at s_vkorg and write the logic as:
    loop at s_vkorg.
      AUTHORITY-CHECK OBJECT 'V_VBAK_VKO'
               ID 'VKORG' FIELD s_vkorg-low
               ID 'VTWEG' DUMMY
               ID 'SPART' DUMMY
               ID 'ACTVT' FIELD '03'.
    endloop.
    Please help.
    Thanks,
    Dawood.

You have to do the authorization check after you have fetched all the sales organizations into an internal table, because AUTHORITY-CHECK works on a single field value, not on a select-option. Like this:
SELECT vkorg FROM tvko
  INTO TABLE it_vkorg
  WHERE vkorg IN s_vkorg.
LOOP AT it_vkorg INTO wa_vkorg.
  AUTHORITY-CHECK OBJECT 'V_VBAK_VKO'
           ID 'VKORG' FIELD wa_vkorg-vkorg
           ID 'VTWEG' DUMMY
           ID 'SPART' DUMMY
           ID 'ACTVT' FIELD '03'.
ENDLOOP.
Remember to check sy-subrc after each AUTHORITY-CHECK call inside the loop.

  • Custom authorization object and check logic

    Hi gurus,
We need to apply an additional authorization check in our custom reports.
So I created custom fields & a custom object, and put the statement
      AUTHORITY-CHECK OBJECT 'ZHR_APP01' FOR USER uname
               ID 'ZROLEID' FIELD '03'
               ID 'ZSOBID'  FIELD zzdwbm.
in an ABAP class method centrally, so it can be called by many reports.
But testing shows that sy-subrc is always set to 0, even for users without any authorization.
What did I miss when adding the custom authorization check?
For this case, do I need to maintain the authorization check indicator in SU24?
What confuses me is that in SU24 you have to maintain a transaction, but our authorization check is not for a transaction but for reports and a BSP application. How should I maintain SU24 for that?
Thanks and best regards.
Jun

    Hi,
I have created a custom authorization object for HR named Z_ORIGIN (it has the Personnel Subarea field BTRTL besides what's in authorization object P_ORIGIN) and set it to Check/Maintain for transaction PA30 in SU24.
I can see the entries in the USOBT_C & USOBX_C tables for this object, and I am also able to add this object to roles.
Everything looks fine, but when I execute the transaction the object Z_ORIGIN is never checked (for a user having this object in his/her user master). Only the P_ORIGIN object is checked instead.
We've also run the report RPUACG00, which is mentioned in this thread.
We also coded the authority check in both user exits ZXPADU01 and ZXPADU02 for PA infotype operations.
I believe I'll have to write some ABAP code, e.g. AUTHORITY-CHECK OBJECT 'ZP_ORGIN' etc. Can anybody tell me which user exit or field exit I'll have to put the AUTHORITY-CHECK code in, so that my new custom authorization object is always checked?
But still it is taking the P_ORGIN object.

  • Need to check logic

    Hi ,
I have a doubt: first I have to take data from an Excel sheet, check the existing records in the Excel file, and then validate them.

    Hi Arun,
You can keep field validations at page level or EO level. At field level, input fields are validated before leaving the page. When you set validation at EO level, the check happens when values are about to be stored in the database.
Switch off EO validation if you want the values validated on the page itself. EO validation can provide an additional level of validation, though.
    regards,
    Rajan

  • Sanity check - Logical column named "Month"

    Logical Table: Period
    Logical Column: Month
    Version: 10.1.3.3
    - Can create a dashboard prompt on this column
    - Can create a report filter on this column, but it only works for the month of September
    - Logical SQL looks fine (WHERE Period."Month" = 'August')
    Is this some sort of reserved word weirdness?
    When I do the same report based on "Month Abbreviation" (Aug) I get data.
    Thoughts?

    I have this model and it works fine.
If you create a new condition based on that column and click to generate a list of values to select one, do you see the value 'August' in there? Could there be a space at the end of the string, i.e. 'August_' (underscore standing for a trailing space)?

  • Logical components problem when checking project

    Hello,
    I have searched in the forum for a solution, but the people who had the same kind of problem managed to solve it with means that do not work for me.
    Here is the situation :
    - We have a three tier landscape, DEV-QAS-PRD, in transport domain DOMAIN_A, with domain controller DCA
    - We have our Solution Manager system, DCB, which is part of (and domain controller of) transport domain DOMAIN_B
    - The two transport domains are linked, as specified in SolMan prerequisites for managing systems from other transport domains. This has been tested, and transports can be forwarded and imported freely from one domain to another.
    - I have created the systems, RFC destinations, and logical component for the three tier landscape in SolMan system DCB. The RFCs are all OK.
    - I have created the domain controller DCA system in SolMan DCB, with RFC destination to client 0, as defined in the prerequisite for Change Request Management.
    - Finally, I have created a project in SolMan system DCB, with this logical component.
    When I try to do a Check on this project, in the Change Request management tab, I have green lights everywhere, except for the following :
    - In "Check logical components" : No consolidation system found for DEV-040 (our development system)
    - In "Check consistency of project" : Message from function module /TMWFLOW/CHECK_PRJ_CONS: No export system for PRD-020 (our production system)
    We have checked the TMS settings and transport routes, and everything looks OK.
    Any ideas on what to check/try would be welcome.
    Thanks,
    Thomas

    Hi,
    Check the status of the DC's in the track created in CBS.
    at following location.
    http://url:port/webdynpro/dispatcher/sap.com/tc.CBS.WebUI/WebUI
    There should be no Broken DC's in the track.
If everything is fine here and you still have problems, then check by repairing the classpath of the component.
    Hope this helps you.
    Regards,
    Nagaraju Donikena

  • How to check Data Source extraction Logic

    Hi Experts
    Please explain me in details steps how/where can i check the logic of data sources
    We have data sources based on
1) Custom data source based on a Function Module (where can I check the code/logic?)
2) Standard Business Content data source (where can I check the logic?)
3) Standard Business Content data source which has been enhanced to include ZZ fields (where can I check the logic?)

1) Custom data source based on a Function Module (where can I check the code/logic?)
Go to tcode RSO2, enter the generic DataSource name and click Display. The next screen shows the Function Module that is used for the extraction of data. Copy that FM name, go to tcode SE37, paste the FM name you copied and click Display. This is where you can view the extraction logic for a generic DataSource based on a Function Module.
2) Standard Business Content data source (where can I check the logic?)
Follow the same procedure as for displaying a generic DataSource. On the initial screen, if you get an I-type message after clicking Display, that message shows the name of the Function Module being used; otherwise hit Enter to get to the next screen.
3) Standard Business Content data source which has been enhanced to include ZZ fields (where can I check the logic?)
    Transaction CMOD is used for the creation/maintenance of User Exits. In CMOD, use the dropdown to select the custom Project that's been defined on your source SAP application for BW extraction User Exits, and select the Components radio button. Click on Display. A list of INCLUDE programs will be shown. These INCLUDE programs each represent the four types of DataSources.
    EXIT_SAPLRSAP_001 - Transaction Data DataSources User Exit
    EXIT_SAPLRSAP_002 - Master Data Attribute DataSources User Exit
    EXIT_SAPLRSAP_003 - Master Data Text DataSources User Exit
    EXIT_SAPLRSAP_004 - Master Data Hierarchy DataSources User Exit
You will have to know the type of data (e.g. Transaction, Master Data Attribute, etc.) to know which INCLUDE to go into. Once you know which one, double-click on it and this will bring it up in display mode. Another INCLUDE will be present that begins with Z*. Double-click on that, and you should then be in the area containing the logic that determines what to do for each DataSource, and you should be able to read through the code from there.
    You can also get to these EXIT_SAPLRSAP_NNN programs via tcode SE38.

  • Logic to check batch number using user exit

    hi friends,
I wanted to know if the below checking logic can be implemented in transaction MIGO at the time of goods receipt. Is it possible to implement this checking logic using enhancements? If so, are there any user exits or BAdIs for this?
    PFB the checking logic
    Checking logic:
Example: the existing batch numbers for a material are 10, 20, 30 and 50. If batch number 40 is entered for any new GR, the system should show an information message, since the entered batch number 40 is earlier than the last available batch, i.e. 50.
    thanks and regards,
    vinod

    Hi Vinod,
You can implement the BAdI MRM_HEADER_CHECK if it is header-related data, and you can raise the messages there too.
    Hope it solves your issue.
    Siva
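The checking logic from the example can be sketched language-neutrally before implementing it in the enhancement. A minimal Python sketch, assuming the rule is simply "warn when the new batch number is lower than the highest batch already on file" (the function name check_batch_number is made up for illustration):

```python
def check_batch_number(new_batch, existing_batches):
    """Return a warning message if the new batch number is earlier than the
    highest batch already on file for the material, else None."""
    if not existing_batches:
        return None  # first batch for this material: nothing to compare against
    last = max(existing_batches)
    if new_batch < last:
        return (f"Batch {new_batch} is earlier than the last "
                f"available batch {last}")
    return None

# Example from the question: existing batches 10, 20, 30, 50; new GR with 40.
msg = check_batch_number(40, [10, 20, 30, 50])
print(msg)  # prints the information message, since 40 < 50
```

In the BAdI implementation the equivalent would be selecting the highest existing batch number for the material and raising an information message when the comparison fails.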

  • Questions on Logical corruption

    Hello all,
My DB versions range from 10g to 11.2.0.3 on various OSes. We are in the process of deploying RMAN on our systems and I am having a hard time testing and getting a grip on the whole logical corruption topic. From what I understand (please correct me if I am wrong):
1. I can have CHECK LOGICAL syntax in my backup command (and that will check for both physical and logical corruption). But how much overhead does it have? It seems to be anywhere from 14-20% overhead on backup time.
2. Leaving MAXCORRUPT at its default (which I believe is 0), if there is physical corruption my backup will break and I should get an email/alert saying the backup broke.
3. Would this be the same for logical corruption too? Would RMAN report logical corruption right away like it does for physical corruption? Or do I have to query v$database_block_corruption after the backup is done to find out whether I have logical corruption?
4. How would one test for logical corruption (besides NOLOGGING operations, as our DBs have force logging turned on)?
5. Is it good practice to have CHECK LOGICAL in your daily backup? I have no problem with it for small DBs, but some of our DBs are close to 50 TB+ and I think CHECK LOGICAL would increase the backup time significantly.
6. If RMAN cannot repair logical corruption, then why would I want to do CHECK LOGICAL (besides knowing I have a problem that the end user has to fix by reloading the data, assuming it's a table and not an index that is corrupt)?
7. Are there any best practices when it comes to checking logical corruption for DBs of 50+ TB?
I have searched on here and on Google, but I could not find any way of reproducing logical corruption (maybe there is none), so I wanted to ask the community about it.
Thank you in advance for your time.

    General info:
    http://www.oracle.com/technetwork/database/focus-areas/availability/maa-datacorruption-bestpractices-396464.pdf
You might want to google "fractured block" for information about it without RMAN. You can simulate that by writing a C program to flip some bits, although technically that would be physical corruption. Also see "Dealing with Oracle Database Block Corruption in 11g" on The Oracle Instructor.
One way to simulate it is to use NOLOGGING operations and then try to recover (this is why force logging is used, so google "corruption force logging"). Here's an example: "Block corruption after RMAN restore and recovery !!!" on Practical Oracl. Hey, no simulate, that's for realz!
Somewhere in the recovery docs it explains... aw, I lost my train of thought; you might get better answers with shorter questions, or one question per thread, in this kind of forum. Oh yeah, somewhere in the docs it explains that RMAN doesn't report the error right away, because later in the recovery stream it may decide the block was newly formatted and there wasn't really a problem.
This really depends on how much data is changing and how. If you do many NOLOGGING operations or run a complicated standby, you can run into this more. There's a trade-off between verifying everything and backup windows; site requirements control everything. That said, I've found only paranoid DBAs check enough; IT managers often say "that will never happen." Actually, even paranoid DBAs don't check enough: the vagaries of manual labor and flaky equipment can overshadow anything.
