Best Practice SAP HANA: Unify Tables

I have 2 tables from different systems with the same structure, and we need to unify them. Is it possible to do this with a calculation view, and afterwards build the analytics and other calculation views on top of it?
The tables have the same name but are in different schemas.
Thanks

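A union node in a calculation view is the standard way to do this, and the result can then feed the analytics and other calculation views. As a minimal sketch of the equivalent union in plain SQL (the schema names SYS_A and SYS_B and the table name ORDERS are assumptions for illustration only):

-- Both tables share one structure; tag each row with its source system
CREATE VIEW UNIFIED_ORDERS AS
  SELECT 'SYS_A' AS SOURCE_SYSTEM, T.* FROM SYS_A.ORDERS T
  UNION ALL
  SELECT 'SYS_B' AS SOURCE_SYSTEM, T.* FROM SYS_B.ORDERS T;

A graphical calculation view with a union node does the same thing and is the more idiomatic choice in HANA, since other calculation views can stack on top of it.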

Similar Messages

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    > Please comment on your view of this practice. Thanks!
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    > I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    > Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    The process you describe is what I would expect, and require, in any well-run environment.
    > I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, unsecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    > They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    > I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
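    To make that concrete, the kind of deployment script being requested is just ordered, rerunnable DDL plus approved data. A minimal sketch (the table and column names here are invented for illustration, not taken from the original post):

    -- Step 1: create the object (hypothetical names)
    CREATE TABLE lookup_status (
      status_code  VARCHAR(10)  NOT NULL PRIMARY KEY,
      description  VARCHAR(100) NOT NULL
    );
    -- Step 2: seed the business-approved data, in order
    INSERT INTO lookup_status (status_code, description) VALUES ('NEW', 'Newly created');
    INSERT INTO lookup_status (status_code, description) VALUES ('CLOSED', 'Completed');
    COMMIT;
    -- Step 3 (recovery): documented rollback if the deployment cannot proceed
    -- DROP TABLE lookup_status;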

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; there are currently 2,500 tables in our database, all with the same columns, differing only in table name. Each device
    sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation? (See the sketch after the column list below.)
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]
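    The usual best practice here is a single table keyed by device rather than one table per device; 2,500 identical tables cannot be indexed, maintained or queried as one set. A hedged sketch (the DeviceID column and all type choices are assumptions, since the original post doesn't give them):

    -- One unified table instead of 2,500 per-device copies (illustrative types)
    CREATE TABLE dbo.DeviceMessage (
        DeviceID     INT      NOT NULL,  -- identifies the sending unit
        MessageID    BIGINT   NOT NULL,
        MessageLat   FLOAT    NULL,
        MessageLong  FLOAT    NULL,
        MessageSpeed FLOAT    NULL,
        MessageTime  DATETIME NOT NULL,
        MessageIO    INT      NULL,
        CONSTRAINT PK_DeviceMessage PRIMARY KEY (DeviceID, MessageTime, MessageID)
    );
    -- Per-device, per-period reports then become simple index seeks:
    -- SELECT ... FROM dbo.DeviceMessage
    -- WHERE DeviceID = @id AND MessageTime >= @from AND MessageTime < @to;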

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now and there is nothing else I can do in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, some have a few. This is when the real issues start: they pull their data from our server over the internet while 2,000 units are sending data to it, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all SELECT commands. I worked around it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That is what I want to solve; the problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc., and most of them depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd better use stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance. 
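    On the snapshot question: SQL Server does not do this automatically unless the database-level option is enabled. With READ_COMMITTED_SNAPSHOT on, readers use row versions transparently under the default isolation level; with ALLOW_SNAPSHOT_ISOLATION, each session has to opt in. A minimal sketch (the database name is hypothetical, and note that row versioning requires SQL Server 2005 or later):

    -- Option 1: existing READ COMMITTED readers automatically use row versions
    ALTER DATABASE MyTrackingDb SET READ_COMMITTED_SNAPSHOT ON;

    -- Option 2: versioning is available, but each session must request it
    ALTER DATABASE MyTrackingDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    -- ...then, per connection:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;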

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for getting the catalog of a database.
    I have seen that I can run SELECTs against system tables (or views) such as DBA_TABLES, DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask this question because it seems a strange way to me to get the table names using a simple SELECT, but to get column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of doing an appropriate SELECT against DBA_TAB_COLUMNS?
    cd
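    For what it's worth, the dictionary views do cover both needs with the same SELECT-based approach; a minimal sketch (the DBA_ views need DBA privileges, and the ALL_/USER_ variants work the same way; SCOTT/EMP are the usual demo names, not yours):

    -- List the tables owned by a schema
    SELECT table_name FROM dba_tables WHERE owner = 'SCOTT';

    -- Column metadata for one table, no OCIDescribeAny() needed
    SELECT column_name, data_type, data_length, nullable
      FROM dba_tab_columns
     WHERE owner = 'SCOTT' AND table_name = 'EMP'
     ORDER BY column_id;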

  • Best Practice SAP HealthCare Portal

    Hi SAP Team,
    I have the following question about the SAP HealthCare integration with SAP Portal: what is the best practice for implementation in this case? Is it necessary to develop the application using Web Dynpro, or is there an SAP Portal Business Package available for this solution?
    Best Regards
    Roberto

    I also searched the net and found some links and files about it; I found this PPT file on the topic, hope it helps you.
    http://net.educause.edu/ir/library/powerpoint/EDU03146.pps
    [healthcare technology|http://www.nx3corp.com/]

  • JDeveloper 11.1.1.3: Best Practice for Checkboxes in Table

    Hi there,
    I'm having problems with checkboxes inside the table component.
    Can someone please fill me in on the best practice for using checkboxes inside a table?
    In the database, we are storing values Y and N.
    Thanks,
    Mark

    Hi Mark,
    I suppose you are talking about ADF Faces applications. If so, then I have two preferred approaches, tested and used in real practice:
    *1) First approach: Create a simple converter, define it in the faces-config.xml and set it in the <af:selectBooleanCheckbox> tags' "converter" attribute, for example:*
    package mypackage;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    public class BooleanYNConverter implements Converter {
      public BooleanYNConverter() {
      }

      // Submitted value ("true"/"false") -> model value ("Y"/"N")
      public Object getAsObject(FacesContext facesContext, UIComponent uiComponent, String string) {
        if (string==null) return null;
        String s = string.trim();
        if (s.length()==0) return null;
        if (s.equalsIgnoreCase("true")) return "Y";
        if (s.equalsIgnoreCase("false")) return "N";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + string + " to Y/N. It must be either true or false",
            "Cannot convert " + string + " to Y/N. It must be either true or false" );
        throw new ConverterException(errorMessage);
      }

      // Model value ("Y"/"N") -> rendered value ("true"/"false")
      public String getAsString(FacesContext facesContext, UIComponent uiComponent, Object object) {
        if (object == null) return "";
        if (object.equals("Y")) return "true";
        if (object.equals("N")) return "false";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + object + " to true/false. It must be either Y or N",
            "Cannot convert " + object + " to true/false. It must be either Y or N" );
        throw new ConverterException(errorMessage);
      }
    }
    In faces-config.xml:
      <converter>
        <converter-id>BooleanYNConverter</converter-id>
        <converter-class>mypackage.BooleanYNConverter</converter-class>
      </converter>
    In JSF page:
    <af:selectBooleanCheckbox ... converter="BooleanYNConverter"/>
    N.B. If you use this approach, the ViewObject attribute's Control Type should be set to "Default" instead of "Checkbox" (see the attribute's Control Hints section in the dialog box)!
    *2) Second approach: In the PageDef, define a button binding for Y/N values and map the corresponding item in the table binding to this button binding. In this way you remap the VO attribute's Y/N value to the true/false that the checkbox component expects. Neither converters nor additional configuration are necessary, but you have to repeat this for each checkbox field:*
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.56.60" id="TestPagePageDef" Package="view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables"/>
        <iterator Binds="DeptViewRO" RangeSize="25" DataControl="AppModuleDataControl" id="DeptViewROIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DeptViewROIterator" id="DeptViewRO">
          <nodeDefinition DefName="model.DeptViewRO" Name="DeptViewRO0">
            <AttrNames>
              <Item Value="DeptID"/>
              <Item Value="DeptName"/>
              <Item Value="Flag" Binds="MyFlag"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
        <button IterBinding="DeptViewROIterator" StaticList="true" id="MyFlag">
          <AttrNames>
            <Item Value="Flag"/>
          </AttrNames>
          <ValueList>
            <Item Value="Y"/>
            <Item Value="N"/>
          </ValueList>
        </button>
      </bindings>
    </pageDefinition>
    In the sample above the target VO attribute is called Flag. Have a look at the line <tt><Item Value="Flag" Binds="MyFlag"/></tt>. This line does the magic.
    Hope I've been a bit helpful.
    Dimitar
    Edited by: Dimitar Dimitrov on Nov 13, 2010 1:53 PM
    There was a little mistake: Instead of BooleanYNConverter I had written BooleanYNCheckbox in the <af:selectBooleanCheckbox> tag.

  • Best practice SAP landscape

    Hello all,
    I would like to know if there is some kind of best practice regarding the SAP landscape in a big company.
    For example, is it recommended to have in the landscape an SAP Quality Assurance System open for customizing (transaction SCC4), so that quick customizing tests can be performed at any moment, instead of customizing in the Development system and then transporting to QAS? (This can be very frustrating, because solving and testing an issue may require numerous customizing tasks and resets of customizing.)
    How SAP-compliant would this solution be?
    Thank you very much for your help!
    Daniel Nicula

    Hmmm, I do not know exactly whether the question can be posed here in the GRC-related threads, but it seemed to me that it is somehow connected.
    Anyway, I agree with you that final customizing should be done in DEV and then transported to QAS.
    What I am not sure about is whether it is against SAP recommendations to have a QAS opened for customizing and to try out all the solutions for an issue there; then, in the end, when you are sure of what you want to do and obtain, you do the customizing in DEV as well and follow the normal transport route.
    What can the risks be if you have a QAS opened for customizing?
    Thank you.

  • Best practice to load a table that is a relationship between 2 parent tables

    I need to know the best way to load a table that is a kind of relationship between 2 master tables, i.e. a bridge table relating them. It is an append procedure, not a replace procedure.
    Should it be sqlldr?
    What tool should be used?
    Thanks a lot.
    @}--->----->>----------
    Paola

    Yes, it is inside a file, a kind of .txt; it is ASCII.
    I want to load it without having any problems with constraints at all.
    But the point is that it should be fast enough to avoid performance problems.
    Thank you.
    Paola
    @{--->----->>----------
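    Since the file is ASCII, sqlldr in APPEND mode is a fine fit; if the file can sit on the database server, an external table plus a direct-path insert keeps the whole load in SQL and is usually just as fast. A minimal sketch (the directory object, file, table and column names are all invented for illustration):

    -- Hypothetical names throughout; adjust to your own schema and file
    CREATE TABLE bridge_ext (
      master1_id  NUMBER,
      master2_id  NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
      LOCATION ('bridge.txt')
    );

    -- Direct-path append into the real relationship table
    INSERT /*+ APPEND */ INTO bridge_table (master1_id, master2_id)
      SELECT master1_id, master2_id FROM bridge_ext;
    COMMIT;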

  • Best Practice to load multiple Tables

    Hi,
    I am trying to load around 9-10 tables in SQL Server; all of these belong to a single project.
    The table loads are just SELECT * INTO the destination table, using some joins.
    Is it good to have one stored procedure to load all these tables, or individual stored procedures?
    All these tables are independent of each other.
    Appreciate your help !

    If you are using SQL Server 2008 or onwards, take a look into these links (minimal logging):
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-1.aspx
    http://blogs.msdn.com/sqlserverstorageengine/archive/2008/03/23/minimal-logging-changes-in-sql-server-2008-part-2.aspx
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
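    Since the tables are independent of each other, a common pattern is one small procedure per table plus a thin wrapper that sequences them, so a single failed load can be rerun on its own; a hedged sketch (all procedure, schema and table names here are invented):

    -- One focused procedure per destination table (illustrative names)
    CREATE PROCEDURE dbo.Load_Customers
    AS
    BEGIN
        IF OBJECT_ID('dbo.Customers_Load') IS NOT NULL
            DROP TABLE dbo.Customers_Load;
        SELECT c.*, r.RegionName
        INTO dbo.Customers_Load
        FROM src.Customers AS c
        JOIN src.Regions   AS r ON r.RegionID = c.RegionID;
    END
    GO
    -- Thin wrapper that just sequences the independent loads
    CREATE PROCEDURE dbo.Load_AllTables
    AS
    BEGIN
        EXEC dbo.Load_Customers;
        -- EXEC dbo.Load_Orders;  -- ...one call per table
    END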

  • Best Practice SAP connection

    Hello All
    I am just starting to create reports with CR2008 connected to SAP ECC 6.0. I am suffering some performance issues when trying to create a report on tables VBAK, VBAP and LIPS; report performance is extremely slow. I have a few questions on this:
    1. Can anyone point me to some documentation that would help me set up SAP and CR2008 for improved performance when reporting directly on tables?
    2. Performance-wise, is it better to report on a Classic InfoSet rather than directly on a table in CR2008?
    Any other tips would be greatly appreciated.
    Thanks
    Phillip

    RFC Statement                                                                               
    Function module name                                                   
      Source IP Address         192.168.3.220                                
      Source Server             cham-vm1_CV1_00                              
      Destination IP Address                                                 
      Destination Server                                                     
      Client/Server             Client                                       
      Conversation ID                                                        
      RFC Trace Rec. Status     2                                            
      Sent Bytes                0                                            
      Received Bytes            0                                            
      Total Sent Bytes                                                       
      Total Received Bytes                                                   
      ABAP program name         %_T002S0                                     
      RFC Time                           0                          
      Function module name      /CRYSTAL/OSQL_EXECUTE_INTERNAL                 
      Source IP Address                                                        
      Source Server             cham-vm1_CV1_00                                
      Destination IP Address    192.168.3.220                                  
      Destination Server                                                       
      Client/Server                                                            
      Conversation ID                                                          
      RFC Trace Rec. Status     5                                              
      Sent Bytes                0                                              
      Received Bytes            12.166                                         
      Total Sent Bytes                                                         
      Total Received Bytes                                                     
      ABAP program name                                                        
      RFC Time                  364328.935

  • Best practice for existing target table

    We want to develop mappings for existing target tables.
    1. The tables are imported from the data dictionary
    2. The tables are used in the mappings
    Is it a good idea to set deploy to "no" for these tables, to prevent them from being used by the Deployment Manager (default action: create)?
    Thanks in advance for a good practice
    Stp
    Message was edited by:
    user444776

    Hi,
    Yes, you are right: for the tables which already exist in the database schema, the deployment action should be "None", unless you have planned and made changes to the table structures.
    Cheers
    Mahesh

  • Best practice for archiving FGA_LOG$ table?

    Hi all,
    I am running Oracle 11g Release 1, and have been asked to look at how best to archive the FGA_LOG$ table. We are required to store the logs for 3 years. I ran the following query:
    select owner, segment_name, segment_type, bytes/1024/1024 "MB" from dba_segments
    where tablespace_name = 'AUDIT_TBS' AND rownum <=100
    AND bytes/1024/1024 > 1 order by bytes desc;
    and it shows that the FGA_LOG$ table is currently at 84861 MB!
    Is my best course of action to:
    1. Install the patch necessary to access the DBMS_AUDIT_MGMT package
    2. Move the table to a different tablespace
    3. Use the Data Pump export utility
    4. Cleanup the table with the DBMS_AUDIT_MGMT package?
    Thanks,
    David

    To remove the rows from the table itself:
    delete from fga_log$ where timestamp# < sysdate-(365*3);
    If you want to keep the data outside the database: expdp to a file called this_years_fga_data.dmp, zip it up and keep it wherever you like, then:
    truncate fga_log$
    Repeat annually.
    * Edit, just to clarify: the timestamp# column may depend on the version.
    Edited by: deebee_eh on Apr 25, 2012 5:05 PM
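    If you do end up on the DBMS_AUDIT_MGMT route (steps 1 and 4 above), the purge looks roughly like this; a sketch assuming the patch is installed and a one-time DBMS_AUDIT_MGMT.INIT_CLEANUP call has already been made for the FGA trail:

    BEGIN
      -- Mark everything older than 3 years as already archived...
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
        audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
        last_archive_time => SYSTIMESTAMP - NUMTOYMINTERVAL(3, 'YEAR'));
      -- ...then purge only the rows older than that timestamp
      DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
        audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
        use_last_arch_timestamp => TRUE);
    END;
    /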

  • Server Logon History table in master database has grown to 30GB. Is it a best practice to truncate this table?

    I have noticed that the master database in our staging environment has grown to 30GB, with 299,803,769 records in this table. I am not sure why, but this has happened in the last year. Just from yesterday to today (1/1/2014 to 1/2/2014) it gained 629,921 records. Is it good practice to truncate or delete data from this table?
    Thanks for your inputs

    AFAIK that's not a system table.
    David
    David http://blogs.msdn.com/b/dbrowne/
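    If it is indeed a custom logging table rather than a system table, deleting in batches keeps the transaction log manageable; a hedged sketch (the table and column names are guesses, since the post doesn't show the schema, and the retention window is arbitrary):

    -- Delete old rows in chunks so each batch is a small transaction
    WHILE 1 = 1
    BEGIN
        DELETE TOP (100000) FROM dbo.ServerLogonHistory
        WHERE LogonTime < DATEADD(MONTH, -6, GETDATE());
        IF @@ROWCOUNT = 0 BREAK;
    END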

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order, e.g.
    AAAA150001
    where AAAA is the prefix, 15 is the year and 0001 is the sequence number.
    I proposed the table as below
    Prefix    | Year     | Number
    AAAA    | 15        | 1
    Using the SQL query below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT ('0000'+ CAST (Number+1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    After the whole save process, the running number table is then updated:
    UPDATE RunningNumber SET Number = (Number +1) WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach, and is it good for handling concurrent saving?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset; the table will be as below:
    Prefix    | Year     | Number
    AAAA    | 15        | 8749
    AAAA    | 16        | 1
    I could only use option 1 from your reference.
    To use this approach, I must make sure that:
    a) the number will not be duplicated or skipped, as multiple users are using the system concurrently;
    b) the number will not increment when there is an error after getting the new number.
    Would using the following methods achieve a) and b)?
    1) .NET SqlTransaction.Rollback
    2) SQL ROLLBACK TRANSACTION
    Thanks. To avoid repeating information, the details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Best Regards
    mintssoul
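    On a): the read and the increment can be collapsed into one atomic statement, which removes the race window between the SELECT and the UPDATE entirely; a hedged T-SQL sketch (table and column names taken from the posts above):

    -- One atomic statement: no separate SELECT-then-UPDATE race window.
    -- Run it inside the same transaction as the order insert, so a later
    -- error rolls the increment back too (covers point b).
    UPDATE RunningNumber
    SET Number = Number + 1
    OUTPUT inserted.Prefix + inserted.Year
           + RIGHT('0000' + CAST(inserted.Number AS VARCHAR(4)), 4) AS CurrentNumber
    WHERE Prefix = 'AAAA' AND Year = '15';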

  • Best Practice SAP NetWeaver Portal

    Hi All,
    We are trying to determine where the dividing line should be between what the portal dev team handles and what becomes SAP Security's responsibility. We are thinking that security should take over when assigning iViews to roles, then roles to groups, and lastly groups to users. This is a very high-level view, and we welcome as much detailed information as you can provide.
    Thanks,
    Mary Sims

    Hi Mary,
    I fully agree with John. Portal roles are used to build up navigation; therefore they define what a user would see (if he/she had read permission).
    The mapping between user groups and portal roles, the assignment of users to roles, and the setting of read permissions on certain iViews (pages) or folders in KM is a different thing. This is a security issue and should be done by the security department.
    We face this responsibility discussion in all our portal projects. You just have to rethink the word 'role', which normally isn't used for structuring content; this helps clear things up. But at the end of the day, it's an organizational discussion which depends on your internal organization.
    At least in my experience, it works best if you assign portal roles to portal groups but use the groups, not the roles, for setting the access control lists.
    Maybe another cent,
    Carsten
