SDO_PC, multiple SRIDs - best practise for data model?

Hi,
I'm using UTM and I am getting data covering two zones.
All my existing data is from zone A.
Tables:
pointcloud
pointcloud_blk
Now I'm getting data with very few points from zone A and most points from zone B. It was agreed that the data delivery will be in the SRID for zone B.
So I tested whether this would work. I had two point clouds, one with SRID A and another with SRID B. As soon as I loaded the SRID B point cloud, I could NO LONGER QUERY the point cloud with SRID A.
So it seems to be necessary to use at least a separate pointcloud_blk table, e.g. pointcloud_blk_[srid].
Question: does a separate pointcloud_blk per SRID suffice, or do I also need a pointcloud table per SRID? The pointcloud table only seems interesting because of its EXTENT column, but on the other hand that could be queried via a function, since there are only 10 or so records (point clouds) inside.
Please share your best practices: what works and what does not.

It is necessary to have one pointcloud_blk table for each SRID since there is a spatial index on that table.
As for the PointCloud table itself, it is up to you. You can have pointclouds with different SRIDs in that table.
But if you want to create a spatial index on it, you have to use a function-based index so that the index sees a single SRID for the table.
Since this table usually does not have many rows, this should work fine with one table for different SRIDs.
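For example, a minimal sketch of that approach (untested; the SCHEMA_OWNER prefix, the SRIDs and the DIMINFO bounds are only placeholders): keep one block table per SRID, e.g. pointcloud_blk_25832 and pointcloud_blk_25833, and index the shared pointcloud table through a DETERMINISTIC wrapper that transforms every EXTENT into one agreed SRID before it is indexed.

    CREATE OR REPLACE FUNCTION extent_in_common_srid (g SDO_GEOMETRY)
      RETURN SDO_GEOMETRY DETERMINISTIC
    IS
    BEGIN
      -- transform every extent to one common SRID (placeholder value)
      RETURN SDO_CS.TRANSFORM(g, 25832);
    END;
    /

    -- register spatial metadata for the function-based column ...
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    VALUES ('POINTCLOUD',
            'SCHEMA_OWNER.EXTENT_IN_COMMON_SRID(EXTENT)',
            SDO_DIM_ARRAY(
              SDO_DIM_ELEMENT('X', 100000,  900000, 0.005),
              SDO_DIM_ELEMENT('Y',      0, 9500000, 0.005)),
            25832);

    -- ... and create the spatial index on the wrapper instead of on EXTENT directly
    CREATE INDEX pointcloud_extent_sidx
      ON pointcloud (extent_in_common_srid(extent))
      INDEXTYPE IS MDSYS.SPATIAL_INDEX;

With only 10 or so rows in the pointcloud table you could also skip the index entirely and just apply SDO_CS.TRANSFORM to EXTENT in your queries, as you already suggested.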
siva

Similar Messages

  • Best approach for Data Modelling.

    Hello Experts
    I am building a Customer Scorecard involving SD and Marketing in BI 7.0.
    There are a couple of existing DSOs, some pushing the data into InfoCubes and some not. All the reporting happens from a MultiProvider sitting on top of these Data Targets.
    The team has a primitive design which says that additional DSOs should be created to extract data from the above-mentioned DSOs, based only on the objects that are needed for Customer Scorecard reporting.
    This means I am creating a couple of DSOs as per the current design which is in place.
    When I suggested only creating a Customer Scorecard MultiProvider on top of the already existing Data Targets (avoiding recreating additional DSOs and the hassle of loading and activating them and then loading the data into InfoCubes) and then creating the BEx Queries on top of them, the Lead expressed his concerns about the impact it could have on the existing Data Model and the subsequent transports once the Model is complete.
    What is the best practice to handle a situation like this? I see there are 3 ways to go ahead with this:
    1. Do as the Lead said, which means creating additional DSOs (extracting data from the couple of required existing DSOs), pushing this data into one InfoCube, creating a MultiProvider on top of this (be aware that there is another similar data model that I need to create which will also be embedded into this MultiProvider) and creating BEx Reports from there.
    2. Create only the InfoCubes which will extract data from the already existing DSOs (avoiding the creation of additional DSOs) and then create a MultiProvider from which BEx Reports are created.
    3. Only create a MultiProvider on all the required and already existing DSOs and InfoCubes, checking whether reporting needs aggregated data or not, and then create BEx Reports from there (avoiding the creation of additional DSOs and InfoCubes).
    Note: We use Rev-Track to do the Transports.
    Which one do you think would be the best way to go and what could be the implications? Eventually, the reporting is done in WAD.
    Thanks for your time in advance.
    Cheers,
    Chandu

    Hi,
    Cases 1 and 2 are similar, but it purely depends on the user needs.
    I think you already know the difference between a DSO and a cube:
    DSO - holds detailed-level data
    Cube - holds aggregated data
    As per your needs, use only one target; there is no need to use a DSO -> cube flow for the existing flows.
    You can decide whether you want to use only a DSO or only a cube.
    Case 3: if your requirement can be met with the existing DSOs and you can manage to get the required output at reporting level, then you can go with it. But my guess is that the existing targets alone won't meet your needs.
    About transports:
    You can create one Rev-Track request and assign multiple transports to it.
    You can add and release the transports one by one rather than all at a time.
    If you release them all at once, you may get inconsistency issues and the transport request won't be released.
    Thanks

  • Best Practise for Data Refresh & Hierarchy

    Hi,
    During a recent discussion with one of our BI user groups, the question was raised as to what the best practices are for handling the following two issues.
    Issue 1:
    If entries are posted to prior periods in SAP R/3 (outside of the daily auto-refresh range), the current process is that the user group will ask us to conduct a manual refresh in BI for the prior periods which are affected.
    Question: Is it possible to set up a trigger in the system, so that BI knows which periods are changed and automatically refreshes data for those periods?
    Issue 2:
    If a hierarchy used in the reports is modified, there might be an adverse impact on the financial data the user group reports. The current process we have in place is to run a group of BI reports for both the current year and the prior year to make sure nothing is impacted, but there are limitations to this process. What if there is no impact on the current or prior year, but on the years before that?
    Question: What do other global companies do to minimize such reporting impact, especially when they have hundreds of complex reports?
    If someone has any info on this, please share it.
    Thanks all for your support.
    Regards,
    Murali

  • Best Practise for Data Archiving

    Hi All,
    I have a query in SAP about when the business will do data archiving in their R/3 systems.
    On what basis will they do data archiving?
    Is there a minimum period for which data needs to be kept before it can be archived?
    What will they do before doing the data archiving?
    Regards
    Srini

    Hi Srini,
    1. SAP suggests implementing a data archiving strategy as early as possible to manage database growth.
    However, people tend to think of archiving only when they run into problems like large data volumes, slow system response times, performance issues, etc.
    2. There is a proper way to implement data archiving. The database has to be analyzed first to identify the top DB tables and archiving objects.
    3. Based on the DB analysis, the data archiving plan has to be implemented according to the data management guide.
    4. There is a minimum period, known as the residence time, that has to be completed before any data can be archived. Once a document is business-complete and has served its minimum required period in the database, it can be archived.
    5. Before going for data archiving there are many steps to be followed, like analysis, configuration, etc., which you can see in detail at the link below:
    http://help.sap.com/saphelp_47x200/helpdata/en/2e/9396345788c131e10000009b38f83b/frameset.htm
    Let me know if this helps you.
    -Supriya
    Edited by: Supriya  Shrivastava on May 4, 2009 10:38 AM

  • Best practise for data exchange Flash Lite 2.0?

    Hi all,
    At present I am using LoadVars to communicate between Flash Lite and my server. Is there a better way? I am building an application that needs constant information and updates, such as being able to send messages to different users. At the moment I am checking all the necessary database info using LoadVars. This is called every 1-5 seconds for the different functions.
    This, however, is far from elegant, and the data charges can also add up if the app is running for a long time.
    Can I use a socket-driven solution? Is this possible with Flash Lite 2.0?
    I am using PHP to retrieve info from a MySQL database.
    Thanks.
    Dev

    Ciao,
    it is not up to Adobe to rebuild the Flash Lite player. Adobe licenses the player to mobile phone manufacturers and they decide when/how to implement the FL player on their phones.
    If you want to know the roadmap, some hints here:
    http://www.biskero.org/?p=581
    Alessandro

  • What is the best practise for setting dirty flag of a page/view?

    For a page/view, there are normally two things to do for dirty data:
    1. When it's clean, the Save button is disabled; when it's dirty, the Save button is enabled.
    2. When it's dirty and the window is closed, a popup says "you have unsaved data, closing will lose the data".
    My thought is that it must be handled on the client side, because not every value change is auto-submitted. E.g., when you type the first letter of a string into an input box, the server side does not know about it, but the Save button should be enabled immediately.
    Is it possible to capture all valueChange events in a page or a view on the client side?
    I'm not sure what the best practice is for setting the dirty flag. Is there a better solution? Does ADF provide a facility for this?

    // reconstructed sketch of the original snippet; checkCondition1/2 stand for your own validation checks
    public void save(ActionEvent event) {
        boolean formValid = isFormValid();
        if (formValid) {
            // form is valid: the Save button stays enabled and the save can proceed here
        }
    }

    private boolean isFormValid() {
        boolean valid = true;
        if (checkCondition1()) {   // placeholder for the first validation check
            valid = false;
            showErrorMessage1();
        }
        if (checkCondition2()) {   // placeholder for the second validation check
            valid = false;
            showErrorMessage1();
        }
        return valid;
    }

    private void showErrorMessage1() {
        // when the form is dirty and the window is closed, show a popup:
        // "you have unsaved data, close will lose the data"
    }

  • Best Practise for connecting to Ethernet based device

    Hi,
    I have inherited a system where we have a cDAQ-9181 controlling a vehicle access barrier, with a LabVIEW application on a PC talking to it via Ethernet.
    (The application is very simple - press a button > send a value to the 9181 unit > open the barrier.)
    All works fine most of the time.
    (We occasionally get network-related errors. The LabVIEW application sometimes thinks another PC has reserved the unit, or gives “error 89130 - device not available for routing”.)
    The users would now like to be able to easily run the application from a second PC (not at the same time), but this seems to be a problem. If I exit the application on PC “A” and run it on PC “B” it struggles to reserve the chassis, throws the “89130” error, and I have to restart the unit via MAC.
    While I’m a “veteran” control programmer, I’m new to LabVIEW, and would be very grateful for any pointers on “best practice” for talking to devices via Ethernet, or any specific suggestions for handling multiple PCs talking to a single device.
    Thank You.
    Tim.

    Hi Tim,
    Thank you for your post and welcome to the NI forums.
    There are lots of knowledgebase articles on our website and you should be able to find documentation for most of our hardware.
    There is a good troubleshooting guide for cDAQ Ethernet here (http://ae.natinst.com/public.nsf/web/searchinternal/e67b4e4749f378ff862577270059bd4b?OpenDocument) - it outlines the steps to take to ensure you have as stable a connection as possible. You may have already seen it, but the quick-start guide for your specific device may also be worth consulting for best practices. Are these helpful?
    As for using more than one PC - this shouldn't be too much of an issue. I would expect that the resource isn't being closed correctly - when you exit the App on PC 'A', how are you closing off the resource?
    Best regards,
    Eden S
    Applications Engineer
    National Instruments UK & Ireland

  • Best practises for replication

    Hi,
    I want to know what the best practice is for the duration of database replication between two Cisco ACS servers.
    Regards,
    Atif.

    Hi Atif,
    The replication time interval should always be kept on the higher side (replicate less frequently).
    Reason: every time you replicate the data, the ACS services need to restart, so doing this frequently may affect your production environment.
    However, if you want to replicate internal users' passwords, there is an option to replicate password changes right away without a full replication. You can enable this option under System Configuration -> Local Password Management. With this enabled you could potentially set the replications to a larger interval.
    It also depends on how often you make changes in your ACS. If that is about average, I would say set it to every Sunday at 12:00 PM.
    This is how replication happens:
    The primary ACS stops its authentication and creates a copy of the ACS internal database components that it is configured to replicate. During this step, if AAA clients are configured properly, those that usually use the primary ACS fail over to another ACS. The primary ACS then resumes its authentication service.
    After the preceding events on the primary ACS, the database replication process continues on the secondary ACS. The secondary ACS stops its authentication service and replaces its database components with the database components that it received from the primary ACS. During this step, if AAA clients are configured properly, those that usually use the secondary ACS fail over to another ACS. The secondary ACS then resumes its authentication service.
    HTH
    Regards,
    JK
    Please rate helpful posts.

  • What's the best practise for performance

    Hi all,
    In my outline I have 15 dimensions, and for one dimension I have 39,000 members. So what is the best practice for performance? If we have more dimensions and more members, is there any problem for performance?
    So what is the best practice for dimensions and members?
    Thanks in advance.

    If it is an ASO application it is not a problem.
    If it is a BSO application it will surely hit performance.
    More dimensions will create performance issues.
    If the said 39,000-member dimension is a flat dimension, that will be another issue.
    If BSO is the obvious choice, try to split it into two models.
    Create intermediate groupings for the flat dimension.

  • Best practice for data migration install v1.40 - Error 2732 Directory manag

    Hi
    I'm attempting to install SAP Best Practice for Data migration 1.40 on Win Server 2008 R2 (64 bit).
    Prerequisite error
    Installation program stops with missing file error
    The following file was not found
    ... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
    The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
    Windows installer log displays
    Error 2732 Directory Manager not initialized
    SAP note 1527151 does not exist or is internal.
    Any help appreciated on what the root cause of the error is, as the file does not exist in that folder of the installation zip file.
    The other prerequisite of .NET 3.5.1 is already met.
    The patch has been available since 20.11.2011, so I presume that it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
    There are details on Data Migration v1.4 installations on the SAP website and marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • Using CVS in SQL Developer for Data Modeler changes.

    Hi,
    I am fairly new to SQL Developer Data Modeler and associated version control mechanisms.
    I am prototyping the storage of database designs and version control for the same, using the Data Modeler within SQL Developer. I have SQL Developer version 3.1.07.42 and I have also installed the CVS extension.
    I can connect to our CVS server through sspi protocol and external CVS executable and am able to check out modules.
    Below is the scenario where I am facing some issue:
    I open the design from the checked-out module, make changes and save it. In the File navigator, I look for the files that have been modified or newly added.
    This behaves rather inconsistently, in the sense that even after clicking on the refresh button it sometimes does not get refreshed. Next I try to look for the changes in the Pending Changes (CVS) window. According to other posts, I am supposed to look at the View - Data Modeler - Pending Changes window for data modeler changes, but that always shows up empty (I am not sure if it is only tied to Subversion). I do, however, see the modified files / files to be added to CVS under the Versioning - CVS - Pending Changes window. The issue is that when I click on the refresh button in that window, all the files just vanish and all the counts show 0. Strangely, if I go to Tools - Preferences - Versioning - CVS and just click OK, the pending changes window gets populated again (the counts are inconsistent at times).
    I believe this issue was fixed and should work correctly in 3.1.07.42, but that does not seem to be the case.
    Also, I'm not sure if I can use this CVS functionality available in SQL Developer for the data modeler, or whether I should be using an external client such as WinCvs for check-in/check-out.
    Please help.
    Thanks

    Hi Joop,
    I think you will find that in Data Modeler's Physical Model tree the same icons are used for temporary Tables and Materialized Views as in SQL Developer.
    David

  • Best  Course  for Data Warehousing

    Hi,
    I am planning to join a data warehousing course. I heard there are a lot of courses in data warehousing:
    Data warehousing with ETL tools or
    Data warehousing with Crystal Reports or
    Data warehousing with Business object or
    Data warehousing with Informatica or
    Data warehousing with Bo-Webel or
    Data warehousing with Cognos or
    Data warehousing with Data Stage or
    Data warehousing with MSTR or
    Data warehousing with Erwin or
    Data warehousing with oracle.
    Please suggest which one is best to choose and which has more scope, because I don't know the ABCs of data warehousing, but I have some experience in Oracle.
    Is it a must to have work experience in data warehousing before I can get a job? Please also tell me which is the best book for data warehousing, one that starts from scratch. Please give your suggestions on my queries.
    Thanks & Regards,
    Raji

    Hi,
    Basically, a data warehouse is a concept. To develop a DW we mainly need two tools: an ETL tool and a reporting tool.
    A few famous ETL tools are
    Informatica
    Data Stage
    A few famous reporting tools are
    Crystal Reports
    Cognos
    Business Objects
    As a DW developer you should be familiar with at least one ETL tool and at least one reporting tool. The combination is your choice. It is better to find out the best combination in terms of the job market, and then learn them.
    Erwin is a data modeling tool. It can also be used in a DW implementation. You already have experience with Oracle, so my advice is to go for data warehousing with Oracle or data warehousing with Informatica, and learn one reporting tool as well. I do not know whether there is any reporting tool available from Oracle.
    My suggestion on books.
    Data Warehousing Fundamentals by Paulraj Ponniah, and
    The Data Warehouse Toolkit.
    http://www.inmoncif.com/about.html is one of the best site for Datawarehouse.
    With rgds,
    Anil Kumar Sharma .P
    Assigning points is the way to say thanks on the SDN site.

  • SAP Best Practices for Data Migration :repositories only on MS SQL Server ?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition. The installation guide contains detailed steps to do this. All repositories for Data Services should be running on SQL Server, according to the installation guide.
    The customer I'm working for now does not want to use SQL Server, but DB2, as company standard.
    So I use DB2 for the local and profiler repositories.
    I notice, however, that the web application http://localhost:8080/MigrationServices does not support DB2. The only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or is it by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server. The intent when developing the DM content was to quickly set up a temporary, standardized data migration environment, using tools that are available to everyone. SQL Server Express was chosen to host the repositories because it is easy to set up and can be downloaded for free. Some users have successfully deployed the content on Oracle XE, but as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • Best Practise for rebooting ISE Nodes?

    Hello Community,
    I administer an ISE installation with two nodes (I am not an ISE specialist; my job is just to manage the users/MAC addresses), but now I have to move my ISE nodes from one VMware cluster to another VMware cluster.
    (Both VMware environments are connected to our enterprise network, but they are different environments; vMotion is not possible.)
    I would shut down ISE02, move it to our new VMware environment and start it again.
    Then I would do the same with our ISE01 node.
    Are there any best practices for doing this (shut down the application first, stop replication, etc.)?
    Can I really simply reboot an ISE node, or do I have to consider something before doing this? And after doing this?
    Any tasks after the reboot?
    Thank you for any answer!
    ISE01 - Administration, Monitoring, Policy Service - PRI(A), SEC(M)
    ISE02 - Administration, Monitoring, Policy Service - SEC(A), PRI(M)

    There is a lot to consider here.  If changing environments means changing IP Address and IP Scopes, then your policies, profiles, and dACLs would also have to change among other things.  If this is the case, create a new ISE VM in the new environment using the built in evaluation license and recreate the deployment from the old environment using the addressing scheme of the new environment.  Then spin-up a new Secondary node and register it on the Primary.  Once this is done, you can re-host the license from your old environment onto your new environment.  You can use this tool to re-host:
    https://tools.cisco.com/SWIFT/LicensingUI/loadDemoLicensee?FormId=3999
    If IP Addressing is to remain the same, it gets simpler. 
    First, and always, perform a configuration and operational backup.
    If downtime is not an issue, or if you have a maintenance window of an hour or so: Simply shut down both nodes.  Transfer them to the New Environment and turn them on, Primary Node first, of course.
    If downtime is an issue, shut down the Secondary Node and transfer it to the New Environment.  Start the Secondary Node and when it is up, shut down the Primary Node.  Once services on the primary node have stopped, promote the Secondary Node to Primary Node.
    Transfer the OLD Primary Node to the New Environment and turn it on.  It should assume the role of Secondary Node.  If it does not, assign that role through the GUI.
    Remember, the correct way to shut down an ISE node is:
    application stop ise
    halt
    By using these commands, the risk of database corruption decreases by about 90% (Remember to always backup).
    Please Rate Helpful posts and mark this question as answered if, in fact, this does answer your question.  Otherwise, feel free to post follow-up questions.
    Charles Moreton

  • Best practices for data migration

    Hi All,
    This thread is for those who would like to share their ideas and knowledge on making better and best use of data migration with BusinessObjects Data Services.

