Storing geometry deltas

At my workplace, the data management process for spatial tables works like this: when a new version of a spatial table is ready to load, all rows in the existing table are deleted and all indexes on the table are dropped. The new data is then loaded into the table and the indexes are recreated.
However, for a few of the spatial tables I work on, I would like to retain all historic geometry. For example, the primary key is site_no, and I want to compare the geometry changes for site_no 1 between 2007, 2008, 2009 etc.
At first I thought that I would put all the geometry (both old and current) into a single table with a date column:
CREATE TABLE all_geom_test (
  SITE_NO        NUMBER,
  GEOMETRY_DATE  DATE DEFAULT SYSDATE NOT NULL,
  GEOMETRY       MDSYS.SDO_GEOMETRY);
I then made a view which selected the most recent geometry for each site_no.
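The view was along these lines (reconstructed from memory as a sketch - latest row per SITE_NO by GEOMETRY_DATE):
CREATE OR REPLACE VIEW current_geom_test AS
SELECT site_no, geometry_date, geometry
FROM (SELECT site_no, geometry_date, geometry,
             ROW_NUMBER() OVER (PARTITION BY site_no
                                ORDER BY geometry_date DESC) rn
      FROM all_geom_test)
WHERE rn = 1;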
However, I have found a couple of problems with this. A site_no might exist in 2008 but then be deleted - and therefore not exist - in the next year's version of the spatial data set, yet the view still selects the 2008 geometry. The other problem is that the all_geom_test table is going to get very big. For the first problem, the only solution I can think of is a 'deleted_flag' column.
What would be a better way to store this geometry? How could I store only the deltas between versions, or against a base version?

996454 wrote:
However, for a few of the spatial tables I work on, I would like to retain all historic geometry. For example, the primary key is site_no, and I want to compare the geometry changes for site_no 1 between 2007, 2008, 2009 etc.
Interesting challenge. Have you had a look at Workspace Manager?
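For example, version-enabling your table is a one-liner (the VIEW_WO_OVERWRITE history option keeps a timestamped copy of every change; the table needs a primary key):
EXECUTE DBMS_WM.EnableVersioning('ALL_GEOM_TEST', 'VIEW_WO_OVERWRITE');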
I then made a view which selected the most recent geometry for each site_no. However, I have found a couple of problems with this. A site_no might exist in 2008 but then be deleted - and therefore not exist - in the next year's version of the spatial data set, yet the view still selects the 2008 geometry.
It would be helpful if you gave the exact definition of your view. There's no way of knowing why it still selects the 2008 geometry if we don't have sample data and the actual query that is not supposed to select it.
996454 wrote:
The other problem is that the all_geom_test table is going to get very big. For the first problem, the only solution I can think of is a 'deleted_flag' column.
What would be a better way to store this geometry? How could I store only the deltas between versions, or against a base version?
History tends to become very big very quickly. So before you implement a solution, you have to consider whether you really, really need that history to be available (other than in your normal backups). If the answer is yes, then I would suggest having a look at partitioning your table, and especially your spatial index. It works great: the partitioned spatial index will prune your table for you without the need to put the partitioning column in your WHERE clause, and performance is superb.
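For example, with your all_geom_test table (the yearly range partitions are just an illustration):
CREATE TABLE all_geom_test (
  SITE_NO        NUMBER,
  GEOMETRY_DATE  DATE DEFAULT SYSDATE NOT NULL,
  GEOMETRY       MDSYS.SDO_GEOMETRY)
PARTITION BY RANGE (geometry_date)
  (PARTITION geom_2007 VALUES LESS THAN (DATE '2008-01-01'),
   PARTITION geom_2008 VALUES LESS THAN (DATE '2009-01-01'),
   PARTITION geom_max  VALUES LESS THAN (MAXVALUE));

-- Local (partitioned) spatial index; register the table in
-- USER_SDO_GEOM_METADATA first, as usual
CREATE INDEX all_geom_test_sidx ON all_geom_test (geometry)
  INDEXTYPE IS MDSYS.SPATIAL_INDEX LOCAL;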
As for the second part of your question: what do you mean by "deltas"? The actual changes in the vertices of your geometry? That would be possible, but at a cost. I currently maintain a database for a customer who simply puts the system date and time in an "ObjectEndTime" column when a record is updated or deleted, which means you can select older versions of your geometry quite easily. But calculating the difference between geometries takes a bit more work, and besides: what if one of the other columns has changed but the geometry hasn't? You'll get an empty geometry if you calculate the difference between the geometries, because there is no difference. So I think you need to go back to the drawing board and re-think what it is you want to achieve - and have a serious look at Workspace Manager.
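A sketch of that pattern, assuming an extra OBJECT_END_TIME column (NULL meaning "current version") and a tolerance of 0.005 - both illustrative:
-- Instead of deleting, end-date the old version of the geometry
UPDATE all_geom_test
   SET object_end_time = SYSDATE
 WHERE site_no = 1
   AND object_end_time IS NULL;

-- Delta between two stored versions of the same site
SELECT sdo_geom.sdo_difference(a.geometry, b.geometry, 0.005)
  FROM all_geom_test a, all_geom_test b
 WHERE a.site_no = 1 AND b.site_no = 1
   AND a.geometry_date = DATE '2008-07-01'   -- newer version
   AND b.geometry_date = DATE '2007-07-01';  -- older version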
Of course, if you can describe what exactly it is you want to achieve, you can ask here: there are always lots of people around with brilliant ideas - I have used quite a few of them myself.

Similar Messages

  • Generic Delta 2

    Dear Expert
    I have a Z table consisting of fields like company code, change date, etc.
    I have created a generic delta using change date as the delta field.
    A program loads data into this table.
    Let's say I have loaded all the data for company code 1000, and there is data for other company codes that I also need to load but have not yet.
    I replicate the DataSource to BI 7.
    I have read guides on generic delta and need clarification of its mechanism in more detail.
    ========
    1. Do we need to initialize delta for a generic delta?
    2. If my InfoPackage used for the init has a change date of yesterday, what happens when, some days later, I load company code 2000 data into the Z table and this data has change dates earlier than the init date?
    3. Does init work like this: let's say the data selection in the InfoPackage used for the init is:
    company code = 1000
    change date = yesterday.
    Will the 1st delta InfoPackage be based on the change date of yesterday and the upper and lower limits?
    4. What if I subsequently load other company code data into the Z table? How can this be loaded as delta by the existing delta? Or must there be another init? Can this be done?
    5. Technically, if a delta is loaded today, and 2 days later I manually insert into the Z table a record having the same change date as the 1st delta's change date, how does the delta mechanism know about this? I would like confirmation that this record will not be seen by the delta and will not be picked up and loaded by the next delta InfoPackage, because the change date pointer only moves forward and will never pick up an old change date, and the delta is based on the change date selection.
    Hope you can enlighten me on the above doubts.
    Best regards
    Pascal

    Hi Pascal,
    1. Do we need to initialize delta for a generic delta?
    Yes, you need to do an init to capture the delta records. For this, while designing the DataSource you have to select the Delta checkbox in the DataSource maintenance screen.
    2. If my InfoPackage used for the init has a change date of yesterday, what happens when, some days later, I load company code 2000 data into the Z table and this data has change dates earlier than the init date?
    The change date is populated in the Z table by the system; when you change a record today you won't get yesterday's date. In any case, since in your question the change date is earlier than the last delta run date, the delta won't pick up those records.
    3. Does init work like this: let's say the data selection in the InfoPackage used for the init is:
    company code = 1000
    change date = yesterday.
    Will the 1st delta InfoPackage be based on the change date of yesterday and the upper and lower limits?
    The best practice is to run the init without any selections, unless you have a requirement to create multiple inits on the same DataSource.
    There is no need to give date selections in the InfoPackage (your delta already works on the change date). If you need data for a particular company code only, give the company code; otherwise do not give a company code as a selection. The delta will pick up all new or changed data since the last delta run.
    4. What if I subsequently load other company code data into the Z table? How can this be loaded as delta by the existing delta? Or must there be another init? Can this be done?
    Do not give company code as a selection in your delta InfoPackage; the same delta InfoPackage will pick up the data for all company codes.
    5. Technically, if a delta is loaded today, and 2 days later I manually insert into the Z table a record having the same change date as the 1st delta's change date, how does the delta mechanism know about this?
    As I already said, we cannot set the change date manually; when you save a record, the system populates the change date in the table.
    Delta settings for a generic DataSource are maintained in table ROOSGENDLM (this holds the last delta run timestamp for the DataSource). When you run the delta the next time, it picks up the data that was stored after that timestamp.
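    Conceptually, the next delta run boils down to a selection like this (pseudo-SQL; the timestamp comes from ROOSGENDLM, and the table and field names are just those of your scenario):
    SELECT *
      FROM ztable
     WHERE change_date > :last_delta_timestamp;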
    Regards,
    Venkatesh.

  • Minimize posting block time

    Hi Gurus,
    we are using inventory management in BW 3.5, and the data is loaded from SAP R/3. For every maintenance operation on the DataSources (2LIS_03_BF, ...) or upgrade, we have to do an initialization on the R/3 side, which means blocking all stock movements during the initialization (~8 hours!). This situation is very inconvenient for the users.
    Do you have any solution to minimize the posting block time?
    thank you.

    hi,
    Thank you Dhanya. So if I get it right, it's possible to have zero downtime during setup and initialization with this method:
    1. Delete data in the InfoProviders in BW.
    2. Delete setup tables and delta queues in R/3.
    3. Run init from BW with no data transfer (supposition: this will set up the delta queues).
    4. Run the setup table filling in R/3 (supposition: all documents posted while this operation is going on will be stored in the delta queues).
    5. Run a full repair load in BW.
    6. Run a delta load in BW.
    Is this right?
    How about compression with and without marker update, mentioned here: https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    We are using the first scenario: inventory management with non-cumulative key figures.
    Thank you.

  • Can an ext4 inode be forcefully deleted?

    ~ dmesg excerpts~
    [Fri Jun 1 16:30:45 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 16:30:45 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 16:30:45 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:00:39 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:00:39 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:00:39 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:30:34 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:30:34 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    [Fri Jun 1 17:30:34 2012] EXT4-fs error (device dm-2): ext4_lookup:1044: inode #16260502: comm dropbox: deleted inode referenced: 16277969
    ~
    [Fri Jun 1 08:29:14 2012] ata4.00: exception Emask 0x0 SAct 0x7 SErr 0x0 action 0x0
    [Fri Jun 1 08:29:14 2012] ata4.00: irq_stat 0x40000008
    [Fri Jun 1 08:29:14 2012] ata4.00: failed command: READ FPDMA QUEUED
    [Fri Jun 1 08:29:14 2012] ata4.00: cmd 60/00:00:e7:47:46/01:00:01:00:00/40 tag 0 ncq 131072 in
    [Fri Jun 1 08:29:14 2012] res 41/40:00:dc:48:46/00:00:01:00:00/40 Emask 0x409 (media error) <F>
    [Fri Jun 1 08:29:14 2012] ata4.00: status: { DRDY ERR }
    [Fri Jun 1 08:29:14 2012] ata4.00: error: { UNC }
    [Fri Jun 1 08:29:14 2012] ata4.00: configured for UDMA/133
    [Fri Jun 1 08:29:14 2012] ata4: EH complete
    [Fri Jun 1 08:29:16 2012] ata4.00: exception Emask 0x0 SAct 0x5 SErr 0x0 action 0x0
    [Fri Jun 1 08:29:16 2012] ata4.00: irq_stat 0x40000008
    [Fri Jun 1 08:29:16 2012] ata4.00: failed command: READ FPDMA QUEUED
    [Fri Jun 1 08:29:16 2012] ata4.00: cmd 60/00:10:e7:47:46/01:00:01:00:00/40 tag 2 ncq 131072 in
    [Fri Jun 1 08:29:16 2012] res 41/40:00:dc:48:46/00:00:01:00:00/40 Emask 0x409 (media error) <F>
    [Fri Jun 1 08:29:16 2012] ata4.00: status: { DRDY ERR }
    [Fri Jun 1 08:29:16 2012] ata4.00: error: { UNC }
    [Fri Jun 1 08:29:16 2012] ata4.00: configured for UDMA/133
    [Fri Jun 1 08:29:16 2012] ata4: EH complete
    smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.3.4-2-ARCH] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
    === START OF INFORMATION SECTION ===
    Model Family: SAMSUNG SpinPoint F3
    Device Model: SAMSUNG HD103SJ
    Serial Number: S246J90B317366
    LU WWN Device Id: 5 0024e9 204c645a8
    Firmware Version: 1AJ10001
    User Capacity: 1,000,204,886,016 bytes [1.00 TB]
    Sector Size: 512 bytes logical/physical
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: 8
    ATA Standard is: ATA-8-ACS revision 6
    Local Time is: Fri Jun 1 21:08:06 2012 BST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity
    was never started.
    Auto Offline Data Collection: Disabled.
    Self-test execution status: ( 0) The previous self-test routine completed
    without error or no self-test has ever
    been run.
    Total time to complete Offline
    data collection: ( 9360) seconds.
    Offline data collection
    capabilities: (0x5b) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    Offline surface scan supported.
    Self-test supported.
    No Conveyance Self-test supported.
    Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 2) minutes.
    Extended self-test routine
    recommended polling time: ( 156) minutes.
    SCT capabilities: (0x003f) SCT Status supported.
    SCT Error Recovery Control supported.
    SCT Feature Control supported.
    SCT Data Table supported.
    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x002f 100 100 051 Pre-fail Always - 312
    2 Throughput_Performance 0x0026 052 052 000 Old_age Always - 9287
    3 Spin_Up_Time 0x0023 071 069 025 Pre-fail Always - 9067
    4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 182
    5 Reallocated_Sector_Ct 0x0033 252 252 010 Pre-fail Always - 0
    7 Seek_Error_Rate 0x002e 252 252 051 Old_age Always - 0
    8 Seek_Time_Performance 0x0024 252 252 015 Old_age Offline - 0
    9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 7805
    10 Spin_Retry_Count 0x0032 252 252 051 Old_age Always - 0
    11 Calibration_Retry_Count 0x0032 252 252 000 Old_age Always - 0
    12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 747
    191 G-Sense_Error_Rate 0x0022 100 100 000 Old_age Always - 1
    192 Power-Off_Retract_Count 0x0022 252 252 000 Old_age Always - 0
    194 Temperature_Celsius 0x0002 064 059 000 Old_age Always - 28 (Min/Max 14/44)
    195 Hardware_ECC_Recovered 0x003a 100 100 000 Old_age Always - 0
    196 Reallocated_Event_Count 0x0032 252 252 000 Old_age Always - 0
    197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 1
    198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 20
    199 UDMA_CRC_Error_Count 0x0036 200 200 000 Old_age Always - 0
    200 Multi_Zone_Error_Rate 0x002a 100 100 000 Old_age Always - 332
    223 Load_Retry_Count 0x0032 252 252 000 Old_age Always - 0
    225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 747
    SMART Error Log Version: 1
    No Errors Logged
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Extended offline Completed without error 00% 7416 -
    # 2 Short offline Completed without error 00% 7399 -
    # 3 Short offline Completed: read failure 90% 7356 24107308
    # 4 Extended offline Completed: read failure 90% 7356 24107308
    # 5 Short offline Completed without error 00% 129 -
    2 of 2 failed self-tests are outdated by newer successful extended offline self-test # 1
    Note: selective self-test log revision number (0) not 1 implies that no selective self-test has ever been run
    SMART Selective self-test log data structure revision number 0
    Note: revision number not 1 implies that no selective self-test has ever been run
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
    1 0 0 Completed [00% left] (0-65535)
    2 0 0 Not_testing
    3 0 0 Not_testing
    4 0 0 Not_testing
    5 0 0 Not_testing
    Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
    One of my disks in a RAID 1 configuration is slowly developing more bad sectors (21 and counting), but this has happened over a period of several months, so I'm not particularly worried about short-term failure. I have reason to believe they are due to vibration.
    Is there a way of telling ext4 to delete the files/inodes in question (which are disposable) without rebooting or unmounting the filesystem to run fsck?
    My system is a remote headless server, and it wouldn't be the first time fsck has got stuck, resulting in several hours of downtime before manual intervention.
    Any attempt to rm or otherwise access the files results in 'Input/output error', which is strange to my mind given the disk is part of an mdadm-based RAID 1 - surely it should simply fail over to the other disk? mdadm doesn't appear to detect any issues and considers the array healthy.

  • Can An Individual Item be Deleted from Time Machine Backup?

    I'm beginning to notice that my disk space is getting limited on my external drive that Time Machine uses to do its backups. Is it possible to go into TimeMachine and delete individual items i.e., a movie etc. in order to regain disk space? If it does a back up of each movie or TV show, and at some point I no longer want/need them so I delete them, they are still on my TimeMachine Backup. I would like to also delete it from the Backup. Is that possible?
    Thanks !
    Chuck :- )

    ChuckLD wrote:
    I'm beginning to notice that my disk space is getting limited on my external drive that Time Machine uses to do its backups. Is it possible to go into TimeMachine and delete individual items i.e., a movie etc. in order to regain disk space? If it does a back up of each movie or TV show, and at some point I no longer want/need them so I delete them, they are still on my TimeMachine Backup. I would like to also delete it from the Backup. Is that possible?
    Thanks !
    Chuck :- )
    It's probably not a good idea to change anything in the TM file system. The data are stored as complete files only the first time; after that, only the "deltas" (changes) are stored. When TM restores a file, it uses the original and all the bits and pieces to recreate the most recent version. Disturbing the TM structure is dangerous.
    TM will automatically start deleting old files when it gets full, so don't be concerned about disk space.
    If any old files are on TM that you don't want it to delete, simply restore them, let TM back them up again, and TM will eventually delete the old original copies, leaving your new ones.
    If you have a continual problem with the TM disk getting full, then you need a larger disk.

  • Why is the duration for performing setup (2LIS_03_UM) so long?

    Hello Expert,
    I am loading the 03 setup table now.
    For 2LIS_03_BF, the duration was about 40 minutes and the number of data records in MC03BF0SETUP is about 15,000.
    However, 2LIS_03_UM has run for about 1.5 hours and the number of data records in MC03UM0SETUP is only 580. The thread is still running.
    Does anyone know why?
    BTW, I ran the setup for UM twice yesterday. Because the user's permitted runtime was used up, the operation was terminated.
    Thanks
    F-B-I

    Hi......
    Don't delete the entries of the delta queue.
    When you do the next delta, the previous-to-previous request's delta records, kept for repetition, are automatically deleted, i.e. confirmed.
    The delta entries are stored in the delta queue even after the deltas have been pulled into BW, just in case a delta repetition is needed.
    Also, when you carry out a repeat request in BW for the same delta (in case of failures etc.), these entries get pulled into BW.
    Also, don't delete the queue in SMQ1. Deleting entries in the queue (be it outbound or inbound) is not a good idea; it will lead either to data loss or to data inconsistency.
    To my knowledge, we do not need to delete it.
    Regards,
    Debjani........

  • Minimize WLW build time?

    Hi!
    Is it possible to get Workshop to build only the elements that have actually changed on the file system? In regular Ant you typically achieve this by using the timestamp mechanisms.
    This would greatly decrease the build time, and thereby improve the WLW user experience significantly!
    Thanks in advance.
    Regards
    Vidar Moe

  • Network type question

    Spatial,
    I have some data sets I'd like to load into node, link, and path tables, step 1 will be creating the tables. My data seems to describe a logical network (no link distances, no angles between nodes and links) except that many links are directed. What network type best fits this data?
    Thank you.

    Hi Brian,
    Are you storing geometry data? If so, use a spatial network; if not, use a logical network. In Oracle 10g Release 1, a directed network requires storing duplicate links to represent bidirected links. In Oracle 10g Release 2, a single link can be stored as bidirected in a directed network (however, the same cost then applies in both directions).
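    For example, creating a directed logical network with the default table naming looks roughly like this (the network name is illustrative; the node and link tables and network metadata are created for you):
    BEGIN
      SDO_NET.CREATE_LOGICAL_NETWORK(
        'MY_NET',  -- network name
        1,         -- number of hierarchy levels
        TRUE,      -- is_directed
        FALSE);    -- nodes carry no cost
    END;
    /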
    I hope I answered the question!
    Dan

  • HT5953 How can an iBook selection be deleted so it no longer shows up? I would like Winnie the Pooh gone.

    How can I remove an iBook? I want Winnie the Pooh gone. Further, if I bought it, shouldn't I be able to toss it in the trash?

  • Master Data Extraction-Questions

    Hi guys,
    I was referring to SAP materials on master data extraction, and I read something like this:
    "Among master data DataSources, some support delta and some do not. Of those that support a delta mechanism, some use the DELTA QUEUE. Some do not use the DELTA QUEUE functionality, generally in the case of small volumes of data. Then there are some other DataSources which use ALE CHANGE POINTERS for the delta mechanism."
    1. Can anyone explain how one can do delta in the case of small volumes of data without using the DELTA QUEUE functionality? What is the need to go for it when we have the DELTA QUEUE functionality?
    2. How does one do delta using ALE change pointers? What is the need to go for this when we have the DELTA QUEUE functionality?
    Thanks in advance.
    Regards
    Schand

    Hi Des,
    I think you are explaining the difference between "Delta Update" and "Delta Queue". I am well aware of these two things.
    DELTA QUEUE --- a temporary storage for delta records in the R/3 system before they are loaded successfully into BI.
    DELTA UPDATE --- the type of delta update of delta records from the R/3 system to the BI system.
    My QUESTION is:
    Among both master data DataSources and transaction data DataSources, some support delta and some do not. Usually, delta records are stored in the DELTA QUEUE in ECC before being uploaded into BI. But some MASTER DATA DATASOURCES do not use the DELTA QUEUE to store delta records in ECC before uploading them into BI, and they do this in the case of small volumes of data. Does anyone know how they do this if they are not using the DELTA QUEUE?
    Secondly, the SAP materials mentioned that some other DataSources use ALE change pointers to determine the delta. In this case too, they do not use the DELTA QUEUE to store delta records before uploading into BI. What are ALE change pointers? How do we make the settings for this?
    Hope I explained it better.
    Regards
    S

  • Difference between LV 6.1 and LV 7.0 - Date/Time Format

    I found different behaviour of numeric controls in Date/Time format, and of the "Seconds To Date/Time String" function, between LV 6.1 and LV 7.0.
    In LV 6.1:
    Absolute time in seconds is formatted in a control with date/time format, and the output depends on the local time settings of the computer where the VI runs. So if I start LabVIEW on a machine with a specific time zone and, for example, DST on, the output is changed by these settings. ANY number of seconds is handled by these settings.
    In LV 7.0:
    The output still depends on the specific time zone, but whether DST is applied depends on the absolute time value. So if the number represents an absolute time that falls in a period when DST was off, the output is without DST, even if DST is on when the VI runs.
    The solution in LV 7.0 is much better than in LV 6.1, but...
    If data are measured and stored (as absolute-time DBLs) in one time zone, and then processed in a different time zone (for example, one where the DST change happens at a different time), the data representation is wrong.
    In LV 6.1 it was not correct either, but at least there was a way to solve the problem:
    I stored the absolute-time DBL and the GMT delta for every time stamp. When I process this stored data on a different computer with a different time zone, I recalculate the time stamps this way:
    absolute time + stored GMT delta - GMT delta of the computer where the data are presented.
    So in the end I have the correct time, no matter which time zone or other settings are on the computer where the data are presented.
    But in LV 7.0 the GMT delta is not constant for all data, and this algorithm is useless. I can easily show the correct time when the data are from the same time zone, without any calculation, but it is almost impossible to show data from a different time zone correctly.
    My question is:
    Is there any "ini file item" which can tell LabVIEW 7.0 to use the time representation style of LV 6.1?
    Thank you and best regards
    Jiri Hula

    Thanks for the well written question.
    Unfortunately, there is no ini token or any way of specifying LabVIEW use the LV 6.1 style.
    I am attaching a LabVIEW 7.0 VI that calculates the UTC Offset for a given time value (I don't think the same code would work correctly in LabVIEW 6.1). The VI comments are:
    The attached VI will take Daylight Saving and Time zone into account to compute the offset in seconds from UTC Time to the Local time (as specified by the computer). Note: This VI can aid in converting from Local Time to UTC time, but not in converting from one Timezone to another.
    I hope it helps. Basically, your data is in UTC (absolute time) as it has always been, but LabVIEW 7.0 changed the way it displays the UTC data (trying to be more correct). If you wish to display it the way it was in LabVIEW 6.1, things are going to get a little tricky. LabVIEW 6.1 displayed the same DBL absolute time differently depending on if the current computer time was in DST or not. To get this behavior in LabVIEW 7.0, the equation:
    absolute time + stored GMT delta - GMT delta of computer where data are presented
    is still correct, but for LabVIEW 7.0 the GMT deltas now vary depending on whether the absolute time is in DST or not. It may be possible to convert the GMT offset you have saved from LabVIEW 6.1 into a set or pair of UTC Offsets. This would require a knowledge of what the DST state was for each of the data points in question.
    The DST state or GMT/UTC offset that the computer currently is using may be obtained from the attached VI.
    The absolute time is stored in GMT/UTC so that in different time zones it will still equate to the same time, even though it will be displayed differently. Another data format (such as storing the Hour, Minute, Second, Day, Month, Year; or storing the date as a string) might be more appropriate if the time should be displayed as the same local time regardless of the time zone.
    I hope this helps. Please respond if you have any comments or questions.
    Shawn Walpole
    Attachments:
    GetUTCOffset.vi ‏52 KB

  • Anyone know how Delta Links are stored in a Clustered environment?

    We have a situation where we have a clustered EP6 production environment with 4 servers in the load balancer.
    However, we've found that on just one of the servers the Delta Link between one of our roles and its only workset is missing.
    I wanted to know if anyone could explain how these delta links are stored, and/or what happens during a transport import in a clustered environment (e.g. are components deployed only to the central database, or to each server in the cluster)?
    Thanks for any help/advice on this you can provide.

    Hi Steve,
    The delta links are stored in the PCD, which is common to all nodes in the cluster (there is only one database).
    When importing portal content to a cluster, it is stored in two locations: on the central instance, and locally on the specific node to which it was imported.
    All other nodes are informed that there is new content, but will only load it locally on demand (when a user calls that content).
    Please award points if you find this answer helpful.
    Regards,
    Amit.

  • How can I delete all stored emails from my Firefox web browser

    Some of my friends use my laptop, and as I was trying to sign in to my Facebook account I found their emails. This really annoys me. I tried to search for an answer on how to delete stored emails and couldn't find one.
    I appreciate your help. I deleted the whole browser and re-installed it, but nothing changed.

    You may have a malware problem if you get unrelated pop-ups opening or are redirected to unrequested websites.
    Do a malware check with a few malware scan programs.
    You need to use all the programs because each detects different malware.
    Make sure that you update each program to get the latest version of its database before doing a scan.
    * http://www.malwarebytes.org/mbam.php - Malwarebytes' Anti-Malware
    * http://www.superantispyware.com/ - SuperAntispyware
    * http://www.safer-networking.org/en/index.html - Spybot Search & Destroy
    * http://www.lavasoft.com/products/ad_aware_free.php - Ad-Aware Free
    * http://www.microsoft.com/windows/products/winfamily/defender/default.mspx - Windows Defender: Home Page
    See also "Spyware on Windows" (http://kb.mozillazine.org/Popups_not_blocked) and the "Searches are redirected to another site" help article.

  • Tables storing Delta information

    Hello All,
    Is there any table in ECC that stores the history of delta extractions done through a particular DataSource, similar to RSREQDONE in BW?
    As ROOSPRMSC stores only the latest request, it is not sufficient; I need the penultimate successful delta request.
    Thanks,

    Hi,
    RSA7 --> BI delta queue
    RSO3 --> set up delta for a generic DataSource
    RSUPDSIMULD --> table for saving simulation data (update)
    RSUPDSIMULH --> table for saving simulation data (header information)
    RSDLPSEL --> selection table for scheduler fields (InfoPackages)
    RSLDPDEL --> selection table for deletions with the full-update scheduler
    Transaction LBWQ --> qRFC-related tables:
    TRFCQOUT,
    QREFTID,
    ARFCSDATA
    Transaction SE93 will let you get all transaction codes.
    For more tables, go through the link below:
    http://sapbwneelam.blogspot.com/2007/08/bw-useful-tables.html
    Transaction Codes for BW Developers
    Regards,
    Marasa.

  • Trying to get multiple cell values within a geometry

    I am provided with 3 tables:
    1 - The GeoRaster table
    2 - The GeoRaster data table
    3 - A VAT table whose PK is the cell value from the above tables
    Currently the user can select a point in our application, and by using getCellValue we get the cell value, which is the PK of the 3rd table; this gives us the details to return to the user.
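    That point lookup is essentially the following (using the raster table from the code further down; the SRID and coordinates are just examples):
    SELECT sdo_geor.getCellValue(
             a.georaster, 0,
             sdo_geometry(2001, 27700,
                          sdo_point_type(473517, 173650.3, NULL), NULL, NULL),
             '1') AS cell_value
      FROM jba_megaraster_1012 a
     WHERE a.id = 1;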
    We now want to give the worst-case scenario within a given geometry or distance. So if I can get back all the cell values within a given geometry/distance, I can then call my other functions against the 3rd table to get the worst scores.
    I had a conversation open about this before, in which JeffreyXie had some brilliant input, but it got archived while I was waiting for Oracle to resolve a bug (about 7 months).
    See:
    Trying to get multiple cell values within a geometry
    If I am looking to get a list of cell values that interact with my geometry/distance and then loop through them, is there a better way?
    BTW, if anybody wants to play with this functionality, it only seems to work in 11.2.0.4.
    Below is the code I was using last. I think it is retrieving the cell values, but the numbers coming back are not correct; I think I am converting the binary to integer wrongly.
    Any ideas?
    CREATE OR REPLACE FUNCTION GEOSUK.getCellValuesInGeom_FNC RETURN VARCHAR2 AS
      gr     sdo_georaster;
      lb     blob;
      win1   sdo_geometry;
      win2   sdo_number_array;
      status VARCHAR2(1000) := NULL;
      CDP    varchar2(80);
      FLT    number := 0;
      cdl    number;
      vals   varchar2(32000) := null;
      VAL    number;
      amt0   integer;
      amt    integer;
      off    integer;
      len    integer;
      buf    raw(32767);
      MAXV   number := null;
      r1     raw(1);
      r2     raw(2);
      r4     raw(200);
      r8     raw(8);
      MATCH  varchar2(10) := '';
      ROW_COUNT   integer := 0;
      COL_COUNT   integer := 0;
      ROW_CUR     integer := 0;
      COL_CUR     integer := 0;
      CUR_XOFFSET integer := 0;
      CUR_YOFFSET integer := 0;
      ORIGINY     integer := 0;
      ORIGINX     integer := 0;
      XOFF   number(38,0) := 0;
      YOFF   number(38,0) := 0;
    BEGIN
      status := '1';
      SELECT a.georaster INTO gr FROM JBA_MEGARASTER_1012 a WHERE id = 1;
      -- First figure out the cell depth from the metadata
      cdp := gr.metadata.extract('/georasterMetadata/rasterInfo/cellDepth/text()',
             'xmlns=http://xmlns.oracle.com/spatial/georaster').getStringVal();
      if cdp = '32BIT_REAL' then
        flt := 1;
      end if;
      cdl := sdo_geor.getCellDepth(gr);
      if cdl < 8 then
        -- If cell depth < 8 bits, get the cell values as 8-bit integers
        cdl := 8;
      end if;
      dbms_lob.createTemporary(lb, TRUE);
      status := '2';
      -- Querying/clipping polygon: a 10m buffer around the point
      win1 := SDO_GEOM.SDO_BUFFER(SDO_GEOMETRY(2001, 27700,
                MDSYS.SDO_POINT_TYPE(473517, 173650.3, NULL), NULL, NULL), 10, .005);
      status := '1.2';
      sdo_geor.getRasterSubset(gr, 0, win1, '1', lb, win2, NULL, NULL, 'TRUE');
      -- Then work on the resulting subset stored in lb
      status := '2.3';
      DBMS_OUTPUT.PUT_LINE('cdl: ' || cdl);
      len := dbms_lob.getlength(lb);
      cdl := cdl / 8;  -- cell depth in bytes from here on
      -- Make sure to read all the bytes of a cell value in one run
      amt  := floor(32767 / cdl) * cdl;
      amt0 := amt;
      status := '3';
      ROW_COUNT := (WIN2(3) - WIN2(1)) + 1;
      COL_COUNT := (WIN2(4) - WIN2(2)) + 1;
      -- NEED TO FETCH FROM RASTER
      ORIGINY := 979405;
      ORIGINX := 91685;
      -- CALCULATE BLOB AREA
      YOFF := ORIGINY - (WIN2(1) * 5);  -- 177005
      XOFF := ORIGINX + (WIN2(2) * 5);  -- 530505
      status := '4';
      -- LOOP CELLS
      off := 1;
      WHILE off <= LEN LOOP
        dbms_lob.read(lb, amt, off, buf);
        for I in 1..AMT/CDL LOOP
          if cdl = 1 then
            r1 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
            VAL := UTL_RAW.CAST_TO_BINARY_INTEGER(R1);
          elsif cdl = 2 then
            r2 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
            val := utl_raw.cast_to_binary_integer(r2);
          ELSIF CDL = 4 then
            -- Note: this branch reads cdl+1 (5) bytes per cell rather than
            -- cdl (4), and bounds-checks against the total LOB length (len)
            -- rather than the bytes actually read into buf (amt)
            IF (((i-1)*cdl+1) + cdl) > len THEN
              r4 := utl_raw.substr(buf, (i-1)*cdl+1, (len - ((i-1)*cdl+1)));
            ELSE
              r4 := utl_raw.substr(buf, (i-1)*cdl+1, cdl+1);
            END IF;
            if flt = 0 then
              val := utl_raw.cast_to_binary_integer(r4);
            else
              val := utl_raw.cast_to_binary_float(r4);
            end if;
          elsif cdl = 8 then
            r8 := utl_raw.substr(buf, (i-1)*cdl+1, cdl);
            val := utl_raw.cast_to_binary_double(r8);
          end if;
          -- Track the maximum cell value seen so far
          if MAXV is null or MAXV < VAL then
            MAXV := VAL;
          end if;
          -- Append the value to the pipe-separated result string
          IF i = 1 THEN
            VALS := VALS || VAL;
          ELSE
            VALS := VALS ||'|'|| VAL;
          END IF;
        end loop;
        off := off + amt;
        amt := amt0;
      end loop;
      dbms_lob.freeTemporary(lb);
      status := '5';
      RETURN VALS;
    EXCEPTION
      WHEN OTHERS THEN
        RAISE_APPLICATION_ERROR(-20001, 'GENERAL ERROR IN MY PROC, Status: '||status||', SQL ERROR: '||SQLERRM);
    END;

    Hey guys,
    Zzhang,
    That's a good spot, and as it happens I had spotted that too, which is why I am sure I am querying that LOB wrongly: the logic always runs past the total length of the LOB.
    I think I am OK using 11.2.0.4. If I can get this working it is really important to us, so having to roll up to 11.2.0.4 for this would be no problem.
    The error in 11.2.0.3 was an internal error: [kghstack_underflow_internal_3].
    Something I think I need to find out more about, but am struggling to get information on, is this: I am assuming that the LOB that is returned is all cell values, or at least an array of 4-byte (32-bit) chunks, although I don't know this.
    Is that a correct assumption, or is there more to it?
    Have either of you seen any documentation on how to query this LOB?
    Thanks
