Date Cutting

Hi All,
I have a Start Date and an End Date. I insert a row in the table with the start date as sysdate and the end date left blank. When I insert another row in the table, I would like to update the previous row's end date with sysdate-1, but the problem is that if I insert both rows on the same day (today itself), I run into trouble.
Eg:
Start Date
24-Sep-2007 09:24:32
If this is the start date and I'm inserting a row now, I need to reduce the new timestamp by one second and update the previous row's end date with it.
Thanks in Advance,

what's the problem? sample code and/or error messages please?
btw, when you say +1 or -1 I assume you are talking about seconds, but to oracle, +1 would add a whole day, while -1 would subtract a whole day. so if you want one second ago you would say sysdate-1/24/60/60
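For illustration, a minimal sketch of the insert/update pattern described above, assuming a hypothetical table MY_DATES (START_DATE, END_DATE) where the open row is the one whose END_DATE is still null:

-- close the open row one second before the new row's start time
UPDATE my_dates
SET    end_date = SYSDATE - 1/24/60/60
WHERE  end_date IS NULL;

-- insert the new row with the end date left blank
INSERT INTO my_dates (start_date, end_date)
VALUES (SYSDATE, NULL);

COMMIT;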

Similar Messages

  • Data Cut

    Hi All,
    We have one main scenario which has data and many rules, along with journal automation rules. We have another scenario which we use for a data cut, meaning on WD5 we run a consolidation on that other scenario to get the data from the main scenario at that point in time. So, this scenario is only for the data cut.
    So, the issue is that in HFM all rules should work in this scenario as well; if any rule is not applied in some way then the data between them will not reconcile. In HFM, can't we just have an exact copy from one scenario to another, like a snapshot of the data whenever we want?
    Is this, or something of the kind, possible in HFM? Any suggestions please...
    Thanks
    Ankush

    Hi,
    So, your scenario is functioning as a version? After you play around with the data in Scenario A and you are fine with the data, then you want to move the data into Scenario B, which shouldn't be changed anymore. Is that correct?
    The problem is, when you move data from A to B, you need to re-run a rules file in order to have your full data.
    As suggested by Thanos, you may add in your rules file that you want to run this rules file for both scenario A and scenario B. That way you won't have to worry about having 2 different kinds of rules; only 1 is needed.
    Or, you can create a trigger input by the user, so that when the user inputs "1", your rules will copy the data (all levels, including journal, proportion, elimination, etc.) from A to B. In this case, you won't need to re-run a consolidation for scenario B. You will have an exact copy of the data (through the rules file) in B at all levels.
    Hope this helps.
    Regards,
    Anna

  • 4G and Data cutting in and out

    I have had a Thunderbolt for about 3 months now, and about 4 days ago my phone stopped picking up 4G. I have used the phone info test and have it set correctly to pick up 4G. The data will also cut in and out if I am streaming video or playing games. I am more concerned about the 4G. It was running continuously from the day I bought the phone until 4 days ago. I have rebooted several times, but it will not go into 4G. Any help would be appreciated.

    Please reply and let me know if you are still having problems getting connected to 4G.
    When you rebooted, did you remove the battery & even the SIM card? Once you removed & reinserted, what happened?
    Is there anyone else with or around you that may also have a 4G device from Verizon? If so, are they also having problems picking up 4G?
    Have you also tried these steps from the phone?
    (Two work arounds provided to us from HTC are listed below)
    1.  From the home screen, select Applications.
    2.  Select Settings.
    3.  Select Wireless & network.
    4.  Select Mobile networks.
    5.  Select Use packet data.
    Enabled when an orange check mark is present.
    The next is:
    1.  From the home screen, select and hold the status bar (located at the top of the display).
    2.  Drag the window shade to the bottom of the display.
    3.  Select Mobile data.
    Enabled when an orange indicator is present.
    Let me know what happens. I want to ensure you are able to continue to Rule The Air with your 4G device.

  • How to avoid a line of data being cut by page jump ?

    Dear Java Experts,
    I have encountered a very troublesome problem!
    The problem is described as follows:
    When the browser (IE or Netscape) contains a lot of HTML character data (especially HTML tables and data from a DATABASE), how can I print the data to the client's printer
    (i.e. the browser's printer) without having a line of data cut by a page break?
    That is, how do I avoid a line of data being cut by a page break, with the upper part of the data on one page and the lower part on the next?
    I think the problem should be solved with a client-side Java applet!
    Can any expert give me sample code to solve the problem, or point me to documents about solving it?
    Thank you very much in advance!!
    Best Regards,
    Jackie Su

    HTML is not really meant for device-specific presentation (despite a lot of abuses in this area).
    I don't think you're going to solve this satisfactorily with HTML, no matter what you jury-rig around it.
    If this kind of presentation control is a requirement, you're probably better off with a format that directly supports this--PDF is the most obvious choice.
    There's a sourceforge project that supports PDF generation from Java (http://sourceforge.net/projects/itext/ - I don't have any experience with it) and I think there may be others.
    And I believe there are also commercial products for this.

  • Coredump when adding new data to a document

    Hi,
    I have managed to get a coredump when adding data to a document,
    initially using the Python API but I can reproduce it with a dbxml script.
    I am using dbxml-2.2.13 on RedHat WS 4.0.
    My original application reads XML data from files, and adds them
    one at a time to a DbXML document using XmlModify.addAppendStep
    and XmlModify.execute. At a particular document (call it "GLU.xml") it
    segfaults during the XmlModify.execute call. It is not malformed data in
    the file, because if I remove some files that are loaded at an earlier stage,
    GLU.xml is loaded quite happily and the segfault happens later. Changing
    my application so that it exits just before reading GLU.xml, and loading GLU.xml's
    data into the container file using the dbxml shell's "append" command produces
    the same segfault. The stacktrace is below. Steps #0 to #7 inclusive are the
    same as the stacktrace I got when using the Python API.
    Can anyone give me any suggestions? I could send the dbxml container file and
    dbxml script to anyone who would be prepared to take a look at this problem.
    Regards,
    Peter.
    #0  ~NsEventGenerator (this=0x9ea32f8) at NsEventGenerator.cpp:110
    110                     _freeList = cur->freeNext;
    (gdb) where
    #0  ~NsEventGenerator (this=0x9ea32f8) at NsEventGenerator.cpp:110
    #1  0x009cacef in DbXml::NsPullToPushConverter8::~NsPullToPushConverter8$delete ()
        at /scratch_bernoulli/pkeller/dbxml-2.2.13/install/include/xercesc/framework/XMLRefInfo.hpp:144
    #2  0x00a5d03c in DbXml::NsDocumentDatabase::updateContentAndIndex (this=0x96b7a60,
        new_document=@0x96e3608, context=@0x96a3fc8, stash=@0x96a4098) at ../scoped_ptr.hpp:44
    #3  0x009a71b1 in DbXml::Container::updateDocument (this=0x96a71d0, txn=0x0, new_document=@0x96e3608,
        context=@0x96a3fc8) at shared_ptr.hpp:72
    #4  0x009b8465 in UpdateDocumentFunctor::method (this=0xb7d3a008, container=@0x96a71d0, txn=0x0, flags=0)
        at TransactedContainer.cpp:167
    #5  0x009b70c5 in DbXml::TransactedContainer::transactedMethod (this=0x96a71d0, txn=0x0, flags=0,
        f=@0xbff66500) at TransactedContainer.cpp:217
    #6  0x009b71e4 in DbXml::TransactedContainer::updateDocument (this=0x96a71d0, txn=0x0,
        document=@0x96e3608, context=@0x96a3fc8) at TransactedContainer.cpp:164
    #7  0x009d7616 in DbXml::Modify::updateDocument (this=0x96c1748, txn=0x0, document=@0xbff665b0,
        context=@0xbff669dc, uc=@0xbff669e4)
        at /scratch_bernoulli/pkeller/dbxml-2.2.13/dbxml/build_unix/../dist/../include/dbxml/XmlDocument.hpp:72
    #8  0x009d9c18 in DbXml::Modify::execute (this=0x96c1748, txn=0x0, toModify=@0x96a7280,
        context=@0xbff669dc, uc=@0xbff669e4) at Modify.cpp:743
    #9  0x009c1c35 in DbXml::XmlModify::execute (this=0xbff666c0, toModify=@0x96a7280, context=@0xbff669dc,
        uc=@0xbff669e4) at XmlModify.cpp:128
    #10 0x08066bda in CommandException::~CommandException ()
    #11 0x0805f64e in CommandException::~CommandException ()
    #12 0x08050c82 in ?? ()
    #13 0x00705de3 in __libc_start_main () from /lib/tls/libc.so.6
    #14 0x0804fccd in ?? ()
    Current language:  auto; currently c++

    Hi George,
    I can get the coredump with the following XML data (cut down from its original
    size of around 900Kb):
    <file name="GLU.xml">
    <_StorageUnit time="Wed Apr  5 11:06:49 2006" release="1.0.212"
    packageName="ccp.ChemComp" root="tempData" originator="CCPN Python XmlIO">
    <parent>
      <key1 tag="molType">protein</key1>
      <key2 tag="ccpCode">GLU</key2>
    </parent>
    <StdChemComp ID="1" code1Letter="E" stdChemCompCode="GLU" molType="protein" ccpCode="GLU" code3Letter="GLU" msdCode="GLU_LFOH" cifCode="GLU" merckCode="12,4477">
      <name>GLUTAMIC ACID</name>
      <commonNames>L-glutamic acid</commonNames>
    </_StorageUnit>
    <!--End of Memops Data-->
    </file>
    This happens when the data from 106 other files have been inserted beforehand
    (ranging in size from 1Kb to 140Kb). If I manipulate the order so that the above data
    is loaded earlier in the sequence, it inserts fine and I get the coredump when
    loading data from a different file.
    The actual XmlModify calls look something like:
      qry = mgr.prepare("/datapkg/dir[@name='dir1']/dir[@name='dir2']", qc)
      mdfy.addAppendStep(qry, XmlModify.Element, "",
                       '<file name="' + fileName + '">' +
                          data[pos:] + "</file>")
      mdfy.execute(XmlValue(doc), qc, uc)
    where data[pos:] points to the location in the mmap-ed file containing the
    above data just after the <?xml ...?> header.
    If you want to try to reproduce the crash at your end there are a couple of ways
    we could do it. I have just figured out that this forum software doesn't let me
    upload files or reveal my e-mail address in my profile, but you can contact me with
    username: pkeller; domain name: globalphasing.com and I can send the
    data to you.
    Regards,
    Peter.

  • Inventory data load issue

    HI all,
    We have 2 source systems, SAP 4.7 and ECC 6.0. I am using 3 DataSources: BX, BF and UM.
    We require one year of data from SAP 4.7 and all data from ECC 6.0.
    In SAP 4.7 a total of 7 years of data is available, but we require only the last year. My doubt is: if I extract only the last year of data, will OPENING STOCK and CLOSING STOCK show correctly in my report?
    - The SAP 4.7 closing stock will be the opening stock in ECC 6.0, in the manner in which it was uploaded to the source system.
    Since we are getting data from two source systems, what extraction steps do I have to follow?
    Kindly give me your suggestions.
    Thanks
    sara

    Hi,
    First you need to make sure the closing stock of R/3 4.7 matches the ECC 6.0 opening stock. It should be the same; usually this is addressed when the data cut-over happens. You can cross-check it using transaction codes MB5B / MB52 etc.
    When you load the data using the BX DataSource, it is a full load and pulls the data as of the current date. So it will bring the data only from the connected ECC 6.0 system, and there won't be any issue with the data.
    BF will bring the material movements, which are needed if you want to see the historic data. So do the loading in the normal manner, and split the load depending on the data volume. While doing the setup table activity in 4.7, select only the needed period. Do the same for the UM DataSource as well.
    When setting up the delta, do it only for 6.0.
    Regards

  • Is there anyone else having a major problem with excessive data usage with the iPhone 5??

    My wife and I are on a shared 4GB data plan; she has the Galaxy Note 2 and I have the iPhone 5. She's on her phone way more than I am, but I am using way more data than she is, double to be exact. She sends pictures all the time and all of that, I do not, but yet I am still using more data. I have cut off everything from using my cellular data except Facebook, Twitter, Instagram, and Onavo Count; I have deleted iCloud from my phone and my location is turned off, but I am still using a lot of data. I was wondering, is this issue because of Verizon, or is it because of the iPhone 5? My boss has an iPhone 5 through AT&T and she's not using as much data as I am either, and I am pretty sure she's on her phone more than I am as well, and she hasn't cut off half of what I have from using cellular data. I have been with Verizon since Feb. 2012 and I have always had this problem with data: while my wife uses barely 1GB a month, I am using almost 2GB every month. I am not complaining, I just want to know if anyone else is experiencing the same thing, and if so, what kind of solutions have you found? Thanks in advance! I hope I hear from someone... preferably a Verizon representative.

    Thank you for your response. I do not stream music or videos, I do not
    FaceTime, and my apps are not set to update automatically, even on wifi; I
    manually update them so they do not use cellular data either. Yes, I have
    been able to view my data sessions, and honestly it's not adding up. This
    morning it said my phone used over 90,000 kilobytes and I do not know how.
    I am positive my wife uses her phone more than I do, but yet I am using more
    data. I've been to the Verizon store and they have not been able to help me
    at all. I really like the coverage I get from Verizon, but my confusion about
    why my phone is using so much data may cause me to switch providers, and I
    really do not want to; I just really want to get to the bottom of this
    situation. I can't even enjoy my phone because I have cut mainly everything
    off. Location is cut off, I've deleted iTunes off my phone, and I've set mainly
    all my applications to only be accessed over wifi for when I get home. I
    cut off my cellular data as soon as I get home because I am on wifi when
    I am home. My wife does not have to do any of what I've just mentioned,
    but yet I am still using more data than she is. Is my phone defective?
    Something has to be wrong here. Please, I will really appreciate it if you
    or anyone can help me figure out what is going on. Please, thank you!
    Sent from my iPhone
    On Nov 8, 2013, at 12:55 PM, Verizon Wireless Customer Support <

  • Data Migration_LSMW

    hi all,
    need information on data migration and possible methods of LSMW
    Thanks
    Swapna

    Hi,
    Can you do a search on the topics "Data Migration" & "LSMW" in the forum for valuable information.
    Please do NOT post your mail ID; let's all follow some rules.
    Data Migration Life Cycle
    Overview : Data Migration Life Cycle
    Data Migration
    This document aims to outline the typical processes involved in a data migration.
    Data migration is the moving and transforming of data from legacy to target database systems. This includes one to one and one to many mapping and movement of static and transactional data. Migration also relates to the physical extraction and transmission of data between legacy and target hardware platforms.
    ISO 9001 / TickIT accredited
    The fundamental aims of certification are quality achievement and improvement and the delivery of customer satisfaction.
    The ISO and TickIT Standards are adhered to throughout all stages of the migration process.
    •     Customer Requirements
    •     Dependencies
    •     Analysis
    •     Iterations
    •     Data Cleanse
    •     Post Implementation
    •     Proposal
    •     Project Management
    •     Development
    •     Quality Assurance
    •     Implementation
    Customer Requirements
    The first stage is the contact from the customer asking us to tender for a data migration project. The invitation to tender will typically include the Scope /
    Requirements and Business Rules:
    •     Legacy and Target - Databases / Hardware / Software
    •     Timeframes - Start and Finish
    •     Milestones
    •     Location
    •     Data Volumes
    Dependencies
    Environmental Dependencies
    •     Connectivity - remote or on-site
    •     Development and Testing Infrastructure - hardware, software, databases, applications and desktop configuration
    Support Dependencies
    •     Training (legacy & target applications) - particularly for an in-house test team
    •     Business Analysts - provide expert knowledge on both legacy and target systems
    •     Operations - Hardware / Software / Database Analysts - facilitate system housekeeping when necessary
    •     Business Contacts
    •     User Acceptance Testers - chosen by the business
    •     Business Support for data cleanse
    Data Dependencies
    •     Translation Tables - translates legacy parameters to target parameters
    •     Static Data / Parameters / Seed Data (target parameters)
    •     Business Rules - migration selection criteria (e.g. number of months history)
    •     Entity Relationship Diagrams / Transfer Dataset / Schemas (legacy & target)
    •     Sign Off / User Acceptance criteria - within agreed tolerance limits
    •     Data Dictionary
    Analysis
    Gap Analysis
    Identifying where differences in the functionalities of the target system and legacy system mean that data may be left behind or alternatively generating default data for the new system where nothing comparable exists on legacy.
    Liaison with the business is vital in this phase, as mission-critical data cannot be allowed to be left behind; it is usual to consult with the relevant business process leader or Subject Matter Expert (SME). Often it is the case that this process ends up as a compromise between:
    •     Pulling the necessary data out of the legacy system to meet the new system's functionality
    •     Pushing certain data into the new system from legacy to enable certain ad hoc or custom in-house processes to continue.
    Data mapping
    This is the process of mapping data from the legacy to target database schemas taking into account any reformatting needed. This would normally include the derivation of translation tables used to transform parametric data. It may be the case at this point that the seed data, or static data, for the new system needs generating and here again tight integration and consultation with the business is a must.
    Translation Tables
    Mapping Legacy Parameters to Target Parameters
    Specifications
    These designs are produced to enable the developer to create the Extract, Transform and Load (ETL) modules. The output from the gap analysis and data mapping are used to drive the design process. Any constraints imposed by platforms, operating systems, programming languages, timescales etc should be referenced at this stage, as should any dependencies that this module will have on other such modules in the migration as a whole; failure to do this may result in the specifications being flawed.
    There are generally two forms of migration specification: Functional (e.g. Premise migration strategy) and Detailed Design (e.g. Premise data mapping document).
    Built into the migration process at the specification level are steps to reconcile the migrated data at predetermined points during the migration. These checks verify that no data has been lost or gained during each step of an iteration and enable any anomalies to be spotted early and their cause ascertained with minimal loss of time.
    Usually written independently from the migration, the specifications for the reconciliation programs used to validate the end-to-end migration process are designed once the target data has been mapped and is more or less static. These routines count like-for-like entities on the legacy system and target system and ensure that the correct volumes of data from legacy have migrated successfully to the target and thus build business confidence.
    Iterations
    These are the execution of the migration process, which may or may not include new cuts of legacy data.
    These facilitate:
    •     Collation of migration process timings (extraction, transmission, transformation and load).
    •     The refinement of the migration code, i.e. increase data volume and decrease exceptions, through:
    •     Continual identification of data cleanse issues
    •     Confirmation of parameter settings and parameter translations
    •     Identification of any migration merge issues
    •     Reconciliation
    From our experience the majority of the data will conform to the migration rules and as such take a minimal effort to migrate ("80/20 rule"). The remaining data, however, is often highly complex with many anomalies and deviations and so will take up the majority of the development time.
    Data Cuts
    •     Extracts of data taken from the legacy and target systems. This can be a complex task where the migration is from multiple legacy systems and it is important that the data is synchronised across all systems at the time the cuts are taken (e.g. end of day processes complete).
    •     Subsets / selective cuts - Depending upon business rules and migration strategy the extracted data may need to be split before transfer.
    Freeze
    Prior to any iteration, Parameters, Translation Tables and Code should be frozen to provide a stable platform for the iteration.
    Data Cleanse
    This activity is required to ensure that legacy system data conforms to the rules of data migration. The activities include manual or automatic updates to legacy data. This is an ongoing activity, as while the legacy systems are active there is the potential to reintroduce data cleanse issues.
    Identified by
    •     Data Mapping
    •     Eyeballing
    •     Reconciliation
    •     File Integrities
    Common Areas
    •     Address Formats
    •     Titles (e.g. mrs, Mrs, MRS, first name)
    •     Invalid characters
    •     Duplicate Data (see the sketch below)
    •     Free Format to parameter field
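    As an illustration of how one of these (duplicate data) might be identified, here is a minimal SQL sketch; the LEGACY_CUSTOMER table and its ACCOUNT_NO column are purely hypothetical names:

    -- hypothetical check: list legacy account numbers that appear more than once
    SELECT   account_no, COUNT(*) AS occurrences
    FROM     legacy_customer
    GROUP BY account_no
    HAVING   COUNT(*) > 1;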
    Cleansing Strategy
    •     Legacy - Pre Migration
    •     During migration (not advised as this makes reconciliation very difficult)
    •     Target - Post Migration (either manual or via data fix)
    •     Ad Hoc Reporting - Ongoing
    Post Implementation
    Support
    For an agreed period after implementation certain key members of the migration team will be available to the business to support them in the first stages of using the new system. Typically this will involve analysis of any irregularities that may have arisen through dirty data or otherwise and where necessary writing data fixes for them.
    Post Implementation fixes
    Post Implementation Data Fixes are programs that are executed post migration to fix data that was either migrated in an 'unclean' state or migrated with known errors. These will typically take the form of SQL scripts.
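    For illustration only, a minimal sketch of such a fix script; the TARGET_CUSTOMER table and its TITLE column are hypothetical names, and the cleanup rule is borrowed from the "Titles" example under Common Areas above:

    -- hypothetical post-migration fix: normalise titles migrated with inconsistent casing
    UPDATE target_customer
    SET    title = 'Mrs'
    WHERE  title IN ('mrs', 'MRS');
    COMMIT;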
    Proposal
    This is a response to the invitation to tender, which comprises the following:
    Migration Strategy
    •     Migration development models are based on an iterative approach.
    •     Multiple Legacy / Targets - any migration may transform data from one or more legacy databases to one or more targets
    •     Scope - Redwood definition / understanding of customer requirements, inclusions and exclusions
    The data may be migrated in several ways, depending on data volumes and timescales:
    •     All at once (big bang)
    •     In logical blocks (chunking, e.g. by franchise)
    •     Pilot - A pre-test or trial run for the purpose of proving the migration process, live applications and business processes before implementing on a larger scale.
    •     Catch Up - To minimise downtime only business critical data is migrated, leaving historical data to be migrated at a later stage.
    •     Post Migration / Parallel Runs - Both pre and post migration systems remain active and are compared after a period of time to ensure the new systems are working as expected.
    Milestones can include:
    •     Completion of specifications / mappings
    •     Successful 1st iteration
    •     Completion of an agreed number of iterations
    •     Delivery to User Acceptance Testing team
    •     Successful Dress Rehearsal
    •     Go Live
    Roles and Responsibilities
    Data Migration Project Manager/Team Lead is responsible for:
    •     Redwood Systems Limited project management
    •     Change Control
    •     Solution Design
    •     Quality
    •     Reporting
    •     Issues Management
    Data Migration Analyst is responsible for:
    •     Gap Analysis
    •     Data Analysis & Mapping
    •     Data migration program specifications
    •     Extraction software design
    •     Exception reporting software design
    Data Migration Developers are responsible for:
    •     Migration
    •     Integrity
    •     Reconciliation (note these are independently developed)
    •     Migration Execution and Control
    Testers/Quality Assurance team is responsible for:
    •     Test approach
    •     Test scripts
    •     Test cases
    •     Integrity software design
    •     Reconciliation software design
    Other Roles:
    •     Operational and Database Administration support for source/target systems.
    •     Parameter Definition and Parameter Translation team
    •     Legacy system Business Analysts
    •     Target system Business Analysts
    •     Data Cleansing Team
    •     Testing Team
    Project Management
    Project Plan
    •     Milestones and Timescales
    •     Resources
    •     Individual Roles and Responsibilities
    •     Contingency
    Communication
    It is important to have good communication channels with the project manager and business analysts. Important considerations include the need to agree the location, method and format for regular meetings/contact to discuss progress, resources and communicate any problems or incidents, which may impact the ability of others to perform their duty. These could take the form of weekly conference calls, progress reports or attending on site
    project meetings.
    Change Control
    •     Scope Change Requests - a stringent change control mechanism needs to be in place to handle any deviations and creeping scope from the original project requirements.
    •     Version Control - all documents and code shall be version controlled.
    Issue Management
    •     Internal issue management - as a result of Gap analysis, Data Mapping, Iterations Output (i.e. reconciliation and file integrity or as a result of eyeballing)
    •     External issue management - Load to Target problems and as a result of User Acceptance Testing
    •     Mechanism - examples:
    •     Test Director
    •     Bugzilla
    •     Excel
    •     Access
    •     TracNotes
    Development
    Extracts / Loads
    •     Depending on the migration strategy, extract routines shall be written to derive the legacy data required
    •     Transfer data from Legacy and/or Target to interim migration environment via FTP, Tape, CSV, D/B object copy, ODBC, API
    •     Transfer data from interim migration environment to target
    Migration (transform)
    There are a number of potential approaches to a Data Migration:
    •     Use a middleware tool (e.g. ETI, Powermart). This extracts data from the legacy system, manipulates it and pushes it to the target system. These "4th Generation" approaches are less flexible and often less efficient than bespoke coding, resulting in longer migrations and less control over the data migrated.
    •     The Data Migration processes are individually coded to be run on a source, an interim or target platform. The data is extracted from the legacy platform to the interim / target platform, where the code is used to manipulate the legacy data into the target system format. The great advantage of this approach is that it can encompass any migration manipulation that may be required in the most efficient, effective way and retain the utmost control. Where there is critical / sensitive data migrated this approach is desirable.
    •     Use a target system 'File Load Utility', if one exists. This usually requires the use of one of the above processes to populate a pre-defined Target Database. A load and validate facility will then push valid data to the target system.
    •     Use an application's data conversion/upgrade facility, where available.
    Reconciliation
    Independent end-to-end comparisons of data content to create the necessary level of business confidence.
    •     Bespoke code is written to extract the required total figures for each of the areas from the legacy, interim and target databases. These figures are totalled and broken down into the business areas and segments of relevant interest, so that they can be compared to each other. Where differences do occur, investigation then tells us whether to alter the migration code or whether there are reasonable mitigating factors. A count of this kind is sketched below.
    •     Spreadsheets are created to report figures to all levels of management to verify that the process is working and build confidence in the process.
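    A minimal sketch of such a like-for-like count; the LEGACY_ADDRESS, INTERIM_ADDRESS and TARGET_ADDRESS table names are assumed purely for illustration:

    -- horizontal reconciliation: the three counts should agree
    SELECT 'legacy'  AS stage, COUNT(*) AS address_count FROM legacy_address
    UNION ALL
    SELECT 'interim' AS stage, COUNT(*) AS address_count FROM interim_address
    UNION ALL
    SELECT 'target'  AS stage, COUNT(*) AS address_count FROM target_address;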
    Referential File Integrities
    Depending on the constraints of the interim/target database, data may be checked to ascertain and validate its quality. There may be certain categories of dirty data that should be disallowed e.g. duplicate data, null values, data that does not match to a parameter table or an incompatible combination of data in separate fields as proscribed by the analyst. Scripts are written that run automatically after each iteration of the migration. A report is then generated to itemise the non-compatible data.
    Quality Assurance
    Reconciliation
    •     Horizontal reconciliation (number on legacy = number on interim = number on target) and Vertical reconciliation (categorisation counts (i.e. Address counts by region = total addresses) and across systems).
    •     Figures at all stages (legacy, interim, target) to provide checkpoints.
    File Integrities
    Scripts that identify and report the following for each table:
    •     Referential Integrity - check values against target master and parameter files.
    •     Data Constraints
    •     Duplicate Data
    Translation Table Validation
    Run after a new cut of data or a new version of the translation tables; two stages (sketched below):
    •     Verifies that all legacy data is accounted for in the "From" translation
    •     Verifies that all "To" translations exist in the target parameter data
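    A minimal sketch of those two stages, assuming hypothetical tables LEGACY_DATA (STATUS_CODE), TRANSLATION (FROM_CODE, TO_CODE) and TARGET_PARAM (PARAM_CODE):

    -- stage 1: legacy values with no "From" translation
    SELECT DISTINCT l.status_code
    FROM   legacy_data l
    WHERE  l.status_code NOT IN (SELECT t.from_code FROM translation t);

    -- stage 2: "To" translations that do not exist in the target parameter data
    SELECT t.to_code FROM translation t
    MINUS
    SELECT p.param_code FROM target_param p;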
    Eyeballing
    Comparison of legacy and target applications
    •     Scenario Testing - legacy-to-target verification that data has been migrated correctly for certain customers, chosen by the business, whose circumstances fall into particular categories (e.g. inclusion and exclusion Business Rule categories, data volumes etc.)
    •     Regression Testing - testing known problem areas
    •     Spot Testing - a random spot check on migrated data
    •     Independent Team - the eyeballing is generally carried out by a dedicated testing team rather than the migration team
    UAT
    This is the customer-based User Acceptance Test of the migrated data, which will form part of the Customer Signoff.
    Implementation
    Freeze
    A code and parameter freeze occurs in the run up to the dress rehearsal. Any problems post freeze are run as post freeze fixes.
    Dress Rehearsal
    Dress rehearsals are intended to mobilise the resources that will be required to support a cutover in the production environment. The primary aim of a dress rehearsal is to identify the risks and issues associated with the implementation plan. It will execute all the steps necessary to execute a successful 'go live' migration.
    Through the execution of a dress rehearsal all the go live checkpoints will be properly managed and executed and if required, the appropriate escalation routes taken.
    Go Live window (typical migration)
    •     Legacy system 'end of business day' closedown
    •     Legacy system data extractions
    •     Legacy system data transmissions
    •     Readiness checks
    •     Migration Execution
    •     Reconciliation
    •     Integrity checking
    •     Transfer load to Target
    •     User Acceptance testing
    •     Reconciliation
    •     Acceptance and GO Live
    ===================
    LSMW: Refer to the links below; you can get useful info (Screen Shots for various different methods of LSMW)
    Step-By-Step Guide for LSMW using ALE/IDOC Method (Screen Shots)
    http://www.****************/Tutorials/LSMW/IDocMethod/IDocMethod1.htm
    Using Bapi in LSMW (Screen Shots)
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    Uploading Material Master data using BAPI method in LSMW (Screen Shots)
    http://www.****************/Tutorials/LSMW/MMBAPI/Page1.htm
    Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots)
    http://www.****************/Tutorials/LSMW/Recording/Recording.htm
    Uploading Material master data using recording method of LSMW(Screen Shots)
    http://www.****************/Tutorials/LSMW/MMRecording/Page1.htm
    Step-by-Step Guide for using LSMW to Update Customer Master Records(Screen Shots) Batch Input method
    Uploading Material master data using Direct input method
    http://www.****************/Tutorials/LSMW/MMDIM/page1.htm
    Steps to copy LSMW from one client to another
    http://www.****************/Tutorials/LSMW/CopyLSMW/CL.htm
    Modifying BAPI to fit custom requirements in LSMW
    http://www.****************/Tutorials/LSMW/BAPIModify/Main.htm
    Using Routines and exception handling in LSMW
    http://www.****************/Tutorials/LSMW/Routines/Page1.htm
    Reward if useful
    Thanks & regards
    Naren

  • Cut and paste in JTree

    Hi!,
    I'm trying to implement cut, copy and paste in a JTree.
    I need some ideas/suggestions as to how I can achieve this across multiple trees, i.e. cut in one tree and paste it in another tree, both of which are displayed on JInternalFrames.
    I tried implementing it on a single tree, which worked partially.
    I need any ideas that might work, I have no clue how to send data cut from one tree and paste it on another.
    any help will be welcome
    thanks

    Thanks for all your help,
    I now have a new problem: I implemented cut, copy and paste,
    however if I cut a node and try to paste it more than once, it doesn't work. Only the last position where I pasted the node shows it.
    Once you do a getContents from the clipboard, does that remove it??
    I even tried to setContents to the clipboard after I get the contents, but that also doesn't work.
    Does anyone have any ideas why, or how to do this?
    thanks..

  • SQL (Select) based on Excel data

    Hi
    I have accounts & their statuses in an Excel file (like below)
    Account     status     
    1111     1
    22222     11
    I have like 300 rows in excel file
    (sometimes Account will have only 4 digits; in that case I want to prepend a '0')
    (sometimes status will have only 1 or 2 digits; in that case I want to prepend '00' if it's one digit and '0' if it's two digits)
    I want to write a select statement based on those, for example (in this case)
    SELECT * FROM SUPPLIER
    WHERE (Account = '01111' AND Status = '001')
    OR (Account = '22222' AND Status = '011')
    How do I write the select for all 300 rows (in Excel) at one time? I don't want to use (create) tables. And I am using Oracle version 9.
    Thanks

    Hi,
    I assume this question is just about how to format the numbers, not about directly reading the Excel file.
    To use different formats in different situations, all in the same column, use CASE to choose the appropriate format, like this:
    WITH my_excel_table AS
    (    -- Begin test data
         SELECT 1 AS n   FROM dual UNION ALL
         SELECT 22       FROM dual UNION ALL
         SELECT 333      FROM dual UNION ALL
         SELECT 4444     FROM dual UNION ALL
         SELECT 55555    FROM dual UNION ALL
         SELECT 666666   FROM dual UNION ALL
         SELECT 7777777  FROM dual
    )    -- End test data, cut here
    SELECT n
    ,      TO_CHAR ( n
                   , 'fm'
                     ||
                     CASE
                         WHEN n < 1000    THEN '000'
                         WHEN n < 100000  THEN '00000'
                                          ELSE '9999999999'
                     END
                   ) AS f
    FROM   my_excel_table
    ORDER BY n;

    The code above produces:
             N F
             1 001
            22 022
           333 333
          4444 04444
         55555 55555
        666666 666666
       7777777 7777777
    As written, this assumes the numbers are all non-negative integers, but it can be modified.
    Message was edited by:
    Frank Kulash
    Sorry if this confused you. I see now that it doesn't really answer your question.
    You can use TO_CHAR separately on the two columns:
    TO_CHAR (account, 'fm00000')
    TO_CHAR (status, 'fm000')
    or use LPAD, as Visu demonstrated.
    Only use these things for display. In the WHERE-clause (and similar places), you can use leading zeros if you like, but do not use quotes, that is, just say "account = 1111" and "status = 1", or, if you prefer, "account = 01111" and/or "status = 001".
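    In other words, the generated statement could simply compare the numbers; a minimal sketch, assuming ACCOUNT and STATUS are NUMBER columns in your SUPPLIER table:

    SELECT *
    FROM   supplier
    WHERE  (account = 1111  AND status = 1)
       OR  (account = 22222 AND status = 11);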

  • Not all data displayed

    Good Afternoon;
    I have an rdl report written in 2008 R2. When running the report in BIDS, I get back all the data I expect. However, there is a difference as to what is displayed in the ReportViewer regular view vs. the Print Layout view. In the regular, I only see data
    up to a certain point. In the Print Layout, I see all the data.
    In Print Layout there are 310 pages to the report. It looks like the data cuts off on what would be page 256 in the print view, in the regular view. (No page breaks in regular view)
    Is there some kind of limit on how much data is displayed in the regular view?

    No, only 1 of 1. Looking at the row groups, there were no page breaks added in anywhere. So basically it's trying to put all the data on one page. Is there some sort of limit as to the amount of data a "page" can handle?

  • How To control the cutting edge of a wide report

    Hi All;
    I have a wide report with 3 horizontal panels per page, i.e. I have 3 physical pages
    for each logical page.
    Is there a way to control the cutting edge between pages 1 & 2 and between pages 2 & 3 so that it comes at a specified position, to avoid cutting data?
    Regards

    In general, you can just execute the transaction and then at the selection screen go:
    System->Status
    It will show the program name and you can just double click on it to see the source code.
    Rob

  • Over data for the month

    My plan renews on the 6th. Would it be cheaper to just use the data and pay whatever it adds up to, or pay the $5 to add one GB for just this month?

    Are you on pre-paid? If you are then your data cuts off when you use it up.

  • Executing oracle commands in shell using shell variable

    EOD_TYPE=`sqlplus -s scott/tiger@oracledb <<EOF
    set head off
    set feedback off
    set pages 0
    spool file1.dat
    select to_char(sysdate,'monddyyyy') from dual;
    spool off;
    EOF`
    a=$(head -1 file1.dat | cut -c1-9)
    echo $a
    EOD_TYPE1=`sqlplus -s scott/tiger@oracledb <<EOF1
    spool file2
    select * from $a ;
    EOF1`
    file2 contains "select * from mar132008 ;" instead of the data in the mar132008 table.
    Where is the problem?

    I think the -s option might not work in your case.
    Try to run this script with
    sqlplus -s scott/tiger@oracledb @file1.dat
    Also add SPOOL OFF and exit to the end of the second script (a combined sketch follows below).
    btw: other useful set commands are
    SET TRIM ON
    SET TRIMSPOOL ON
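    Putting those suggestions together, here is a minimal sketch of the second call (same connect string, shell variable and spool file as in the original script; untested, it only illustrates the suggested changes):

    EOD_TYPE1=`sqlplus -s scott/tiger@oracledb <<EOF1
    set head off
    set feedback off
    set pages 0
    set trimspool on
    spool file2
    select * from $a ;
    spool off
    exit
    EOF1`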

  • IPSEC VPN clients can't reach internal nor external resources

    Hi!
    At the moment I'm running ASA 8.3, and despite fairly much experience of ASA 8.0-8.2 I can't get the NAT right for the VPN clients.
    I'm pretty sure it's not ACLs, although I might be wrong.
    The problem is both that VPN users can't reach internal resources and that VPN users can't reach external resources.
    # Issue 1.
    IPSEC VPN client cannot reach any local (inside) resources. All interfaces are pretty much allow any any; I suspect it has to do with NAT.
    When trying to access an external resource, the "translate_hits" below are changed:
    Auto NAT Policies (Section 2)
    1 (outside) to (outside) source dynamic vpn_nat interface
       translate_hits = 37, untranslate_hits = 11
    When trying to reach a local resource (10.0.0.0/24), the translate hits below are changed:
    5 (inside) to (outside) source static any any destination static NETWORK_OBJ_172.16.32.0_24 NETWORK_OBJ_172.16.32.0_24
        translate_hits = 31, untranslate_hits = 32
    Most NAT, some sensitive data cut:
    Manual NAT Policies (Section 1)
    <snip>
    3 (inside) to (server) source static NETWORK_OBJ_1.2.3.0_29 NETWORK_OBJ_1.2.3.0_29
        translate_hits = 0, untranslate_hits = 0
    4 (inside) to (server) source static any any destination static NETWORK_OBJ_10.0.0.240_28 NETWORK_OBJ_10.0.0.240_28
        translate_hits = 0, untranslate_hits = 0
    5 (inside) to (outside) source static any any destination static NETWORK_OBJ_172.16.32.0_24 NETWORK_OBJ_172.16.32.0_24
        translate_hits = 22, untranslate_hits = 23
    Auto NAT Policies (Section 2)
    1 (outside) to (outside) source dynamic vpn_nat interface
        translate_hits = 37, untranslate_hits = 6
    Manual NAT Policies (Section 3)
    1 (something_free) to (something_outside) source dynamic any interface
        translate_hits = 0, untranslate_hits = 0
    2 (something_something) to (something_outside) source dynamic any interface
        translate_hits = 0, untranslate_hits = 0
    3 (inside) to (outside) source dynamic any interface
        translate_hits = 5402387, untranslate_hits = 1519419
    ##  Issue 2, vpn user cannot access anything on internet
    asa# packet-tracer input outside tcp 172.16.32.1 12345 1.2.3.4 443
    Phase: 1
    Type: ACCESS-LIST
    Subtype:
    Result: ALLOW
    Config:
    Implicit Rule
    Additional Information:
    MAC Access list
    Phase: 2
    Type: ACCESS-LIST
    Subtype:
    Result: DROP
    Config:
    Implicit Rule
    Additional Information:
    Result:
    input-interface: outside
    input-status: up
    input-line-status: up
    Action: drop
    Drop-reason: (acl-drop) Flow is denied by configured rule
    Relevant configuration snippet:
    interface Vlan2
    nameif outside
    security-level 0
    ip address 1.2.3.2 255.255.255.248
    interface Vlan3
    nameif inside
    security-level 100
    ip address 10.0.0.5 255.255.255.0
    same-security-traffic permit inter-interface
    same-security-traffic permit intra-interface
    object network anywhere
    subnet 0.0.0.0 0.0.0.0
    object network something_free
    subnet 10.0.100.0 255.255.255.0
    object network something_member
    subnet 10.0.101.0 255.255.255.0
    object network obj-ipsecvpn
    subnet 172.16.31.0 255.255.255.0
    object network allvpnnet
    subnet 172.16.32.0 255.255.255.0
    object network OFFICE-NET
    subnet 10.0.0.0 255.255.255.0
    object network vpn_nat
    subnet 172.16.32.0 255.255.255.0
    object-group network the_office
    network-object 10.0.0.0 255.255.255.0
    access-list VPN-TO-OFFICE-NET standard permit 10.0.0.0 255.255.255.0
    ip local pool ipsecvpnpool 172.16.32.0-172.16.32.255 mask 255.255.255.0
    ip local pool vpnpool 172.16.31.1-172.16.31.255 mask 255.255.255.0
    nat (inside,server) source static NETWORK_OBJ_1.2.3.0_29 NETWORK_OBJ_1.2.3.0_29
    nat (inside,server) source static any any destination static NETWORK_OBJ_10.0.0.240_28 NETWORK_OBJ_10.0.0.240_28
    nat (inside,outside) source static any any destination static NETWORK_OBJ_172.16.32.0_24 NETWORK_OBJ_172.16.32.0_24
    object network vpn_nat
    nat (outside,outside) dynamic interface
    nat (some_free,some_outside) after-auto source dynamic any interface
    nat (some_member,some_outside) after-auto source dynamic any interface
    nat (inside,outside) after-auto source dynamic any interface
    group-policy companyusers attributes
    dns-server value 8.8.8.8 8.8.4.4
    vpn-tunnel-protocol IPSec
    default-domain value company.net
    tunnel-group companyusers type remote-access
    tunnel-group companyusers general-attributes
    address-pool ipsecvpnpool
    default-group-policy companyusers
    tunnel-group companyusers ipsec-attributes
    pre-shared-key *****

    Hi,
    I don't seem to get a reply from 8.8.8.8, no; it's kind of hard to tell as it's an iPhone. To me, all these logs simply say it works like a charm, but still I get no reply on the phone.
    asa# ICMP echo request from outside:172.16.32.1 to outside:4.2.2.2 ID=6912 seq=0 len=28
    ICMP echo request translating outside:172.16.32.1/6912 to outside:x.x.37.149/46012
    ICMP echo reply from outside:4.2.2.2 to outside:x.x.37.149 ID=46012 seq=0 len=28
    ICMP echo reply untranslating outside:x.x.37.149/46012 to outside:172.16.32.1/6912
    ICMP echo request from outside:172.16.32.1 to outside:4.2.2.2 ID=6912 seq=256 len=28
    ICMP echo request translating outside:172.16.32.1/6912 to outside:x.x.37.149/46012
    ICMP echo reply from outside:4.2.2.2 to outside:x.x.37.149 ID=46012 seq=256 len=28
    ICMP echo reply untranslating outside:x.x.37.149/46012 to outside:172.16.32.1/6912
    ICMP echo request from outside:172.16.32.1 to outside:4.2.2.2 ID=6912 seq=512 len=28
    ICMP echo request translating outside:172.16.32.1/6912 to outside:x.x.37.149/46012
    ICMP echo reply from outside:4.2.2.2 to outside:x.x.37.149 ID=46012 seq=512 len=28
    ICMP echo reply untranslating outside:x.x.37.149/46012 to outside:172.16.32.1/6912
    asa# show capture capo
    12 packets captured
       1: 08:11:59.097590 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
       2: 08:11:59.127129 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
       3: 08:12:00.103876 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
       4: 08:12:00.133293 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
       5: 08:12:01.099253 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
       6: 08:12:01.127572 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
       7: 08:12:52.954464 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
       8: 08:12:52.983866 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
       9: 08:12:56.072811 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
      10: 08:12:56.101007 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
      11: 08:12:59.132897 802.1Q vlan#2 P0 x.x.37.149 > 4.2.2.2: icmp: echo request
      12: 08:12:59.160941 802.1Q vlan#2 P0 4.2.2.2 > x.x.37.149: icmp: echo reply
    asa# ICMP echo request from outside:172.16.32.1 to inside:10.0.0.72 ID=6912 seq=0 len=28
    ICMP echo reply from inside:10.0.0.72 to outside:172.16.32.1 ID=6912 seq=0 len=28
    ICMP echo request from outside:172.16.32.1 to inside:10.0.0.72 ID=6912 seq=256 len=28
    ICMP echo reply from inside:10.0.0.72 to outside:172.16.32.1 ID=6912 seq=256 len=28
    ICMP echo request from outside:172.16.32.1 to inside:10.0.0.72 ID=6912 seq=512 len=28
    ICMP echo reply from inside:10.0.0.72 to outside:172.16.32.1 ID=6912 seq=512 len=28
    ICMP echo request from outside:172.16.32.1 to inside:10.0.0.72 ID=6912 seq=768 len=28
    ICMP echo reply from inside:10.0.0.72 to outside:172.16.32.1 ID=6912 seq=768 len=28
    asa# show capture capi
    8 packets captured
       1: 08:15:44.868653 802.1Q vlan#3 P0 172.16.32.1 > 10.0.0.72: icmp: echo request
       2: 08:15:44.966456 802.1Q vlan#3 P0 10.0.0.72 > 172.16.32.1: icmp: echo reply
       3: 08:15:47.930066 802.1Q vlan#3 P0 172.16.32.1 > 10.0.0.72: icmp: echo request
       4: 08:15:48.040082 802.1Q vlan#3 P0 10.0.0.72 > 172.16.32.1: icmp: echo reply
       5: 08:15:51.028654 802.1Q vlan#3 P0 172.16.32.1 > 10.0.0.72: icmp: echo request
       6: 08:15:51.110086 802.1Q vlan#3 P0 10.0.0.72 > 172.16.32.1: icmp: echo reply
       7: 08:15:54.076534 802.1Q vlan#3 P0 172.16.32.1 > 10.0.0.72: icmp: echo request
       8: 08:15:54.231250 802.1Q vlan#3 P0 10.0.0.72 > 172.16.32.1: icmp: echo reply
    Packet-capture.
    Phase: 1
    Type: CAPTURE
    Subtype:
    Result: ALLOW
    Config:
    Additional Information:
    MAC Access list
    Phase: 2
    Type: ACCESS-LIST
    Subtype:
    Result: ALLOW
    Config:
    Implicit Rule
    Additional Information:
    MAC Access list
    Phase: 3
    Type: ROUTE-LOOKUP
    Subtype: input
    Result: ALLOW
    Config:
    Additional Information:
    in   172.16.32.1     255.255.255.255 outside
    Phase: 4
    Type: ACCESS-LIST
    Subtype: log
    Result: ALLOW
    Config:
    access-group inside_access_in in interface inside
    access-list inside_access_in extended permit ip any any log
    Additional Information:
    Phase: 5
    Type: IP-OPTIONS
    Subtype:
    Result: ALLOW
    Config:
    Additional Information:
    Phase: 6
    Type: INSPECT
    Subtype: np-inspect
    Result: ALLOW
    Config:
    Additional Information:
    Phase: 7     
    Type: DEBUG-ICMP
    Subtype:
    Result: ALLOW
    Config:
    Additional Information:
    Phase: 8
    Type: NAT
    Subtype:
    Result: ALLOW
    Config:
    nat (inside,outside) source static any any destination static NETWORK_OBJ_172.16.32.0_24 NETWORK_OBJ_172.16.32.0_24
    Additional Information:
    Static translate 10.0.0.72/0 to 10.0.0.72/0
    Phase: 9
    Type: HOST-LIMIT
    Subtype:
    Result: ALLOW
    Config:
    Additional Information:
    Phase: 10
    Type: VPN    
    Subtype: encrypt
    Result: ALLOW
    Config:
    Additional Information:
    Phase: 11
    Type: ACCESS-LIST
    Subtype: log
    Result: ALLOW
    Config:
    access-group outside_access_out out interface outside
    access-list outside_access_out extended permit ip any any log
    Additional Information:
    Phase: 12
    Type: FLOW-CREATION
    Subtype:
    Result: ALLOW
    Config:
    Additional Information:
    New flow created with id 5725528, packet dispatched to next module
    Result:
    input-interface: inside
    input-status: up
    input-line-status: up
    output-interface: outside
    output-status: up
    output-line-status: up
    Action: allow
