Replication better approach

Hi all,
I am setting up transactional replication for one database and would like advice on the best approach, plus some clarification.
For example: I have database A and want to replicate it to two different locations, which creates two publications for database A under Local Publications. My question: if there are two publications, is the data sent to the distributor twice?
One more question. I want to replicate the same database to two different locations, one in the US and one in Kolkata. My publisher database is also in Kolkata, and I want one replication server in Kolkata and one in the US, so I am using a remote distributor in Kolkata. This is my approach:
A - Publisher in Kolkata
B - Distributor in Kolkata
C - Subscriber in Kolkata
D - Subscriber in the US
Please let me know whether I should use pull or push subscriptions, and whether there is any other approach I should use for sending data over the WAN.
If any modification is required, please help.
Thanks in advance.

I am still a little confused about what you are trying to do.
It sounds like you have a single publisher/distributor and are replicating to two different subscribers, and you want to make this as efficient as possible.
Your topology is the best choice: if data makes it to subscriber 1 but not to subscriber 2, you still want it to reach subscriber 2 when it comes back online. This store-and-forward method ensures that replication can pick up where it left off, and that both subscribers get the same dataset, although they might not get the same data at the same time - i.e. Kolkata, being closer to the publisher, will have lower latency and be more up to date than the US server.
Looking for a book on SQL Server 2008 Administration?
http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X
Looking for a book on SQL Server 2008 Full-Text Search?
http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941

Similar Messages

  • Which is better approach to manage sharepoint online - PowerShell Script with CSOM or Console Application with CSOM?

    Changing SharePoint scripts does not require compilation, but is there anything else to consider?

    Yes, PowerShell is great, since you can quickly change your code without compilation.
    A SharePoint admin can write PowerShell scripts without specific tools like Visual Studio.
    With PowerShell you can also use cmdlets, which can remove a lot of code, for example when restarting a service.

  • What's the better approach?

    Hi, I was just writing a program that accesses a DB through JDBC, and I got myself into this dilemma:
    What's the better approach for making a connection to a DB?
    Approach #1 (use of singleton pattern):
    import java.sql.*;

    public class DBConnection {
        private ResultSet rs;
        private Connection conn;
        private PreparedStatement ps;
        private static boolean singleton = false;

        private DBConnection() throws Exception {
            Class.forName("driverPath").newInstance();
            conn = DriverManager.getConnection("url", "user", "pass");
            singleton = true;
        }

        public static DBConnection getInstance() throws Exception {
            if (singleton)
                return null; // an instance already exists
            return new DBConnection();
        }

        protected void finalize() throws Throwable {
            // close the connection and release resources...
            singleton = false;
        }

        // Methods to make DB queries and stuff.
    }
    Approach #2 (make a connection only when doing queries):
    import java.sql.*;
    import java.util.ArrayList;

    public class DBConnection {
        private ResultSet rs;
        private Connection conn;
        private PreparedStatement ps;

        public DBConnection() throws Exception {
            Class.forName("driverPath").newInstance();
        }

        // Just some random method to access the DB
        public ArrayList<Row> selectAllFromTable() {
            ArrayList<Row> returnValue = new ArrayList<Row>();
            try {
                conn = DriverManager.getConnection("url", "user", "pass");
                // make queries and fill the ArrayList with rows from the table
            } catch (Exception ex) {
                returnValue = null;
                ex.printStackTrace();
            } finally {
                // close() can itself throw, so guard the cleanup
                try {
                    if (ps != null) ps.close();
                    if (rs != null) rs.close();
                    if (conn != null) conn.close();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            }
            return returnValue;
        }
    }
    I know these classes may not even compile and I don't handle the exceptions; I'm just trying to make a point about how to manage the connection.
    So, what's the better approach in your opinion? #1? #2? Neither?
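
    As a sketch of what approach #2 looks like in its modern form: on Java 7+ you can let try-with-resources do the close() bookkeeping for you. The "url"/"user"/"pass" strings are the same placeholders as above, and the some_table query and String rows are stand-ins for the poster's Row type:
    import java.sql.*;
    import java.util.ArrayList;

    public class DBQueries {
        // Approach #2: open the connection only for the duration of the query.
        // try-with-resources closes rs, ps and conn in reverse order, even on error.
        public ArrayList<String> selectAllFromTable() throws SQLException {
            ArrayList<String> rows = new ArrayList<String>();
            try (Connection conn = DriverManager.getConnection("url", "user", "pass");
                 PreparedStatement ps = conn.prepareStatement("SELECT name FROM some_table");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString("name")); // copy values out before the resources close
                }
            }
            return rows;
        }
    }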

    Hi,
    I'm resurrecting this thread to ask whether this approach is OK.
    I'm trying to make a single MySql JDBC connection accessible throughout the model.
    I'm planning to use it in a Swing application. Whilst I realise Swing apps are inherently multi-threaded, everything I plan to do can (I think) be done within the constraint that all access to the model happens on the EDT, and the user will just have to wear any unresponsiveness.
    package datatable.utils;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    abstract public class MySqlConnection {
        public static final String URL = "jdbc:mysql://localhost/test";
        public static final String USERNAME = "keith";     // case sensitive
        private static final String PASSWORD = "chewie00"; // case sensitive
        private static final Connection theConnection;

        static {
            String driverClassName = "com.mysql.jdbc.Driver";
            try {
                Class.forName(driverClassName);
                theConnection = DriverManager.getConnection(URL, USERNAME, PASSWORD);
            } catch (Exception e) {
                // DAOException is the poster's own unchecked exception type
                throw new DAOException("Failed to register JDBC driver class \"" + driverClassName + "\"", e);
            }
        }

        public static Connection get() {
            return theConnection;
        }
    }
    Is there a better solution short of c3p0? Which I played with, but couldn't work out how to configure.
    Thanx guys (and uj),
    keith.
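
    For what it's worth, a middle ground between an eagerly initialised static Connection and a full pool like c3p0 is lazy initialisation with a liveness check, so a dropped connection gets replaced rather than cached forever. This is only a sketch: the class name is made up, the URL/credentials are placeholders, and isValid() needs a JDBC 4 (Java 6+) driver:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public final class LazyMySqlConnection {
        private static final String URL = "jdbc:mysql://localhost/test";
        private static Connection theConnection; // confined to the EDT, per the post above

        private LazyMySqlConnection() {}

        public static Connection get() throws SQLException {
            // (Re)open the connection if it was never made or has gone stale;
            // isValid(seconds) pings the server.
            if (theConnection == null || !theConnection.isValid(2)) {
                theConnection = DriverManager.getConnection(URL, "user", "pass");
            }
            return theConnection;
        }
    }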

  • What is a better approach, batch or prepared statement?

    Hi,
    In my application, the user account properties are retrieved from the DB and the last login time is updated after a successful sign-on. There are two query statements in this procedure: a select and an update. What is the better approach in JDBC for dealing with the two queries: batch or prepared statement?
    Thanks.

    Hi, Duffy,
    Thanks for your input.
    I would like to use PreparedStatement with batch update. From my reading, however, it seems it is not suitable to use PreparedStatement with batch update. At least, I haven't seen a single example in the tutorial or article sections of this site. I am not sure whether the following is valid:
    stmt = connection.prepareStatement(INSERT_SMT_QUERY_STR);
    stmt.setLong(1, details.getRegardID());
    stmt.setLong(2, details.getWriterID());
    stmt.setInt(3, details.getVisibility());
    stmt.setString(4, details.getSubject().trim());
    stmt.setString(5, details.getContent().trim());
    stmt.addBatch(); // <-- what goes into here?
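
    For the record, PreparedStatement and batching combine fine: addBatch() takes no arguments on a PreparedStatement - it queues the parameter values you have just set, and executeBatch() sends every queued set in one round trip. A sketch, assuming the poster's Details type and a hypothetical regards table behind INSERT_SMT_QUERY_STR:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchInsert {
        // Hypothetical table and columns; the real INSERT_SMT_QUERY_STR is the poster's.
        private static final String INSERT_SMT_QUERY_STR =
            "INSERT INTO regards (regard_id, writer_id, visibility, subject, content) VALUES (?, ?, ?, ?, ?)";

        public static int[] insertAll(Connection connection, List<Details> detailsList) throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(INSERT_SMT_QUERY_STR);
            try {
                for (Details details : detailsList) {   // Details is the poster's class
                    stmt.setLong(1, details.getRegardID());
                    stmt.setLong(2, details.getWriterID());
                    stmt.setInt(3, details.getVisibility());
                    stmt.setString(4, details.getSubject().trim());
                    stmt.setString(5, details.getContent().trim());
                    stmt.addBatch();        // queue this parameter set
                }
                return stmt.executeBatch(); // execute all queued sets at once
            } finally {
                stmt.close();
            }
        }
    }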

  • What would be the better approach at showing a course is complete?

    (OT: I'm being evaluated to do some online software simulations using Captivate, so I'm trying to merge my Authorware "thinking" with Captivate.)
    I have a 14-slide simulation; the task is to edit preference files. Scoring is not necessary; getting to slide 14 is all that matters.
    Presently I have a click object on slide 13 whose reporting option is "Include in Quiz", adding 1 point.
    So I have a course with Objective ID=Quiz10111, Interaction ID=Interaction11184 and a score of 1.
    But I also note that in the reporting preferences I could use "Slide views only." So maybe this is all that is necessary?
    I'm not that savvy (yet) with LMSs, but if I wanted to write this data out as a tab-delimited file on the hosting server, what should I consider? Authorware had a function "WriteExtFile." Would a button on the last slide set to execute JavaScript be a better approach?
    Another idea looking for a point of view: if I get the project, I am considering a 2-month contract for Adobe Acrobat Connect Pro to get more comfortable with this approach. Would this help?
    FYI... I just downloaded the Captivate 4 manual. Last modified on May 19th, according to the document.
    Thanks

    Regarding your function hanging, that seems to be because your loop never ends. Also, the inner query is missing a WHERE clause:
    Select Translate(comnt_rec.comment_text,'%#$^',' ') into repvalue from test_comnt; <-- How many rows will this bring back?
    However, I don't think you meant to query the test_comnt table at all - it looks as though you just wanted to assign the result of the TRANSLATE expression (btw please, TRANSLATE() or translate() but not Translate() - the parser doesn't care but it drives me nuts), in which case just:
    repvalue := translate(comnt_rec.comment_text,'%#$^',' ');
    or even include the expression in the cursor itself and save yourself the trouble.
    Another thing that drives me nuts, although not the rest of the world apparently, is the name "c1" for a cursor. Why don't you name your variables "v1", "v2" etc? If you had two cursors would you seriously name the second one "c2"?
    While I'm at it, what kind of a name is "test_comnt"?

  • Better approach for adding a new assignment block in a standard component

    Hi
    I need to add a new assignment block in the standard component bt116h_srvo. There are two approaches:
    1. Create a new view in the component bt116h_srvo.
    2. Create a custom component and embed it into bt116h_srvo using component usage.
    Please tell me which one is the better approach, and why.
    Thanks,
    Swati.

    Thanks for the quick reply, Lakshmi. However, I am sure there is no possibility of reuse. My main concern is whether a future patch upgrade would have any impact on views added directly to the standard component, or whether there is any other risk in adding the view directly.
    Rgds,
    Swati

  • XDD Vs Offset file usage -Which is better approach

    Hi,
    We are designing a new workspace (on Documaker 11.5) for a client sending a flat-file input. To reduce the impact of offset-driven changes we are planning to use either of the below:
    1) XDD
    2) OFFSET.DAT file, for which we can update AFGJOB.JDT with the following rules:
    ;LoadEXTOFFS;1;OFFSET.DAT;
    ;CUSAltSearchRec;1;;
    Based on the copybook/layout of the incoming file we should be able to build an XDD or an OFFSET.DAT file.
    The reason I am asking is that we already have some experience with the offset file approach, and we wanted to explore whether XDD is the better approach from a maintenance perspective.
    Regards..

    The XDD extract mapping method is standard support.
    The OFFSET.DAT file you mention was a pseudo-custom implementation originally submitted by CSC/PMSC for their customers. It was added to the product as a courtesy and not really promoted as a mainstream feature (if that is a good term). As such, that feature is not specifically targeted by QA testing or included in internal regression results. I'm not even sure it is documented. I doubt you would be able to get much help (or get it quickly) via Support if you had questions.
    Therefore, the official recommendation would be to use the XDD, but if you have experience with the OFFSET.DAT method and are comfortable with the functionality, then the feature should still be available.

  • Validating 10 numeric fields - any better approach?

    Hello All,
    I have an internal table with 15 fields. The first 5 fields are all text fields and the next 10 fields are of type DMBTR. I have to validate each of these fields. Currently I've coded it as follows:
    data: begin of itab1 occurs 0,
            text1  type char4,
            text2  type char2,
            text3  type char10,
            text4  type char6,
            text5  type char4,
            amt1   type dmbtr,
            amt2   type dmbtr,
            amt3   type dmbtr,
            amt4   type dmbtr,
            amt5   type dmbtr,
            amt6   type dmbtr,
            amt7   type dmbtr,
            amt8   type dmbtr,
            amt9   type dmbtr,
            amt10  type dmbtr,
          end of itab1.
    perform some steps and
    collect data into itab1.
    loop at itab1.
      if itab1-amt1 CA 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' or
         itab1-amt1 CA 'abcdefghijklmnopqrstuvwxyz' or
         itab1-amt1 CA '~`!@#$%^&*()-_=+[{]}\|;:''",<.>/?'.
        write:/ 'error'.
      endif.
      if itab1-amt2 CA 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' or
         itab1-amt2 CA 'abcdefghijklmnopqrstuvwxyz' or
         itab1-amt2 CA '~`!@#$%^&*()-_=+[{]}\|;:''",<.>/?'.
        write:/ 'error'.
      endif.
      "... the same IF repeated for amt3 through amt9 ...
      if itab1-amt10 CA 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' or
         itab1-amt10 CA 'abcdefghijklmnopqrstuvwxyz' or
         itab1-amt10 CA '~`!@#$%^&*()-_=+[{]}\|;:''",<.>/?'.
        write:/ 'error'.
      endif.
    endloop.
    Do we have a better approach to do this, instead of repeating the same step for 10 fields?
    Thanks in advance.
    Siri

    or using field symbols
    data: fieldname(11) type c value 'ITAB1-AMT',
          fullname(13)  type c,
          idx(2)        type c.
    field-symbols: <field> type any.
    loop at itab1.
      do 10 times.
        write sy-index to idx.                   "' 1', ' 2' ... '10'
        condense idx.                            "'1', '2' ... '10'
        concatenate fieldname idx into fullname. "ITAB1-AMT1, ITAB1-AMT2 ... (use a separate
                                                 "target so fieldname stays intact each pass)
        assign (fullname) to <field>.            "<field> now points at the component
        if <field> CA 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' or
           <field> CA 'abcdefghijklmnopqrstuvwxyz' or
           <field> CA '~`!@#$%^&*()-_=+[{]}\|;:''",<.>/?'.
          write: / 'error'.
        endif.
      enddo.
    endloop.
    Regards
    Marcin

  • All components of fusion middleware in one domain - is this better approach

    Please advise: is it a better approach to have all the components of Fusion Middleware (SOA, B2B, BPM, WebCenter, Content Server) in one domain in a clustered production environment? If not, please advise on a better architecture for production.
    Regards,
    Suneel Jakka

    Suneel,
    It's a good idea to use the split-domain approach (multiple domains in one MW Home) for large enterprise deployments, as it makes maintenance much easier. A lot of customers have been using this approach and they are quite happy with it. You get the following benefits from the split-domain approach:
    1. Patching becomes flexible: patching one component will not affect the others
    2. System downtime decreases, because an issue with one component affects only that domain
    3. Reduced risk of the incompatibility or classpath issues that arise from the different components (JARs and third-party utilities) used in each product
    4. Modular integration and more layers of security
    If you are using all of SOA, B2B, BPM, WebCenter and Content Server, and they carry significant load, then it is better to have the domains below:
    1. A dedicated domain for B2B (as it is a gateway product and will be communicating over the internet)
    2. A shared domain for BPM and SOA (if the load is high on both, better to have a separate domain for each)
    3. A shared domain for WebCenter & Content Server (if the load is high on both, better to have a separate domain for each)
    I also recommend installing the WebCenter and SOA products in separate middleware homes so that you can also upgrade them independently.
    Regards,
    Anuj

  • Better approach for checking column values between two different rows

    My requirement is to find the best approach for finding differences between column values of two different rows. Below I've mentioned two different approaches I'm thinking of, but I'd like to know whether there is any other, better approach.
    Version details:
    SQL> SELECT *
      2  FROM V$VERSION;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Solaris: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    Table creation script:
    CREATE TABLE R_DUMMY
       (CA_ID VARCHAR2(16) NOT NULL ENABLE,
        CA_VER_NUM NUMBER(4,2) NOT NULL ENABLE,
        BRWR_SHORT_NAME VARCHAR2(25 CHAR),
        sic_code NUMBER,
        FAC_ID VARCHAR2(10) NOT NULL ENABLE
       )
    /
    Insert script:
    insert into r_dummy (CA_ID, CA_VER_NUM, BRWR_SHORT_NAME, sic_code, FAC_ID)
    values ('CA2001/11/0002', 2.00, 'Nandu', 1234, 'FA000008');
    insert into r_dummy (CA_ID, CA_VER_NUM, BRWR_SHORT_NAME, sic_code, FAC_ID)
    values ('CA2001/11/0002', 3.00, 'SHIJU', 456, 'FA000008');
    Desired O/P:
    ca_id           fac_id    column_name      previous_name  after_modification
    CA2001/11/0002  FA000008  BRWR_SHORT_NAME  Nandu          SHIJU
    CA2001/11/0002  FA000008  sic_code         1234           456
    My approach:
    select  ca_id, fac_id, column_name,
            decode(column_name, 'BRWR_SHORT_NAME', lg_brwr, lg_sic) previous_name,
            decode(column_name, 'BRWR_SHORT_NAME', ld_brwr, ld_sic) after_modification
    from   (
            select  case
                        when ld_brwr != lg_brwr then 'BRWR_SHORT_NAME'
                        when ld_sic  != lg_sic  then 'sic_code'
                    end column_name,
                    ca_id, fac_id, lg_brwr, ld_brwr, ld_sic, lg_sic
            from   (
                    select  lead(brwr_short_name,1) over(partition by ca_id, fac_id order by ca_ver_num) ld_brwr,
                            lag(brwr_short_name,1)  over(partition by ca_id, fac_id order by ca_ver_num) lg_brwr,
                            lead(sic_code,1) over(partition by ca_id, fac_id order by ca_ver_num) ld_sic,
                            lag(sic_code,1)  over(partition by ca_id, fac_id order by ca_ver_num) lg_sic,
                            ca_id, fac_id
                    from    r_dummy
                   )
            where  (ld_brwr != lg_brwr or ld_sic != lg_sic)
           )
    2nd Approach:
    =============
    select  ca_id, fac_id, column_name,
            decode(column_name, 'BRWR_SHORT_NAME', lg_brwr, lg_sic) previous_name,
            decode(column_name, 'BRWR_SHORT_NAME', ld_brwr, ld_sic) after_modification
    from   (
            select  case
                        when ld_brwr != lg_brwr then 'BRWR_SHORT_NAME'
                        when ld_sic  != lg_sic  then 'sic_code'
                    end column_name,
                    o.ca_id ca_id, o.fac_id fac_id, lg_brwr, ld_brwr, ld_sic, lg_sic
            from   (
                    select  ca_id, fac_id, brwr_short_name lg_brwr, sic_code lg_sic
                    from    r_dummy
                    where   ca_ver_num = 2.00
                   ) o,
                   (
                    select  ca_id, fac_id, brwr_short_name ld_brwr, sic_code ld_sic
                    from    r_dummy
                    where   ca_ver_num = 3.00
                   ) n
            where   o.ca_id  = n.ca_id
            and     o.fac_id = n.fac_id
            and     (ld_brwr != lg_brwr or ld_sic != lg_sic)
           )
    Hi Experts,
    I've provided sample data where I'm checking just two columns, viz. brwr_short_name and sic_code, but in reality I have to check 8 more columns, so please suggest a better approach.
    I appreciate your precious suggestions.

    Hi,
    Thanks for posting the CREATE TABLE and INSERT statements; that really helps!
    Here's one way. Like your 2nd approach, this uses a self-join:
    WITH got_r_num AS
    (
        SELECT    ca_id
        ,         ROW_NUMBER () OVER ( PARTITION BY  ca_id
                                       ,             fac_id
                                       ORDER BY      ca_ver_num
                                     )   AS r_num
        ,         brwr_short_name
        ,         TO_CHAR (sic_code)     AS sic_code
        ,         fac_id
    --  ,         ...     -- Other columns (using TO_CHAR if needed)
        FROM      r_dummy
    )
    ,    unpivoted_data AS
    (
        SELECT    *
        FROM      got_r_num
        UNPIVOT   INCLUDE NULLS
                  (    txt
                  FOR  column_name IN ( brwr_short_name  AS 'BRWR_SHORT_NAME'
                                      , sic_code         AS 'SIC_CODE'
    --                                , ...     -- Other columns
                                      )
                  )
    )
    SELECT    p.ca_id
    ,         p.fac_id
    ,         p.column_name
    ,         p.txt    AS previous_name
    ,         a.txt    AS after_modification
    FROM      unpivoted_data  p
    JOIN      unpivoted_data  a  ON   p.ca_id       = a.ca_id
                                 AND  p.fac_id      = a.fac_id
                                 AND  p.column_name = a.column_name
                                 AND  p.r_num       = a.r_num - 1
                                 AND  p.txt || 'X' != a.txt || 'X'
    ORDER BY  a.r_num
    ;
    To include other columns, add them in the 2 places where I put the comment "Other columns".
    Ca_ver_num can have any values, not just 2.00 and 3.00.
    This will show cases where a value in one of the columns changed to NULL, or where NULL changed to a value.
    There ought to be a way to do this without a separate sub-query like got_r_num. According to the SQL Language manual, you can put expressions in the UNPIVOT ... IN list, but when I tried
    ...  UNPIVOT   INCLUDE NULLS
                   (    txt
                   FOR  column_name IN ( brwr_short_name       AS 'BRWR_SHORT_NAME'
                                       , TO_CHAR (sic_code)    AS 'SIC_CODE'
                                       )
                   )
    I got the error "ORA-00917: missing comma" right after TO_CHAR. Perhaps someone else can show how to eliminate one of the sub-queries.

  • LAN chain in iptables. Are there better approaches?

    Hi all.
    I'm a newbie in iptables and network security stuff. Would like to get an advice on a following problem.
    I have a router with IP 192.168.1.1; my LAN contains a bunch of wireless devices and a desktop PC with a static IP *.2.
    I want certain services (ftp, sftp for a local user, game servers, etc.) on my desktop PC to be accessible from any of my wireless devices.
    However, I don't want them to be accessible from the router, because I want to be safe just in case the router gets hacked (the router has DDNS enabled and runs sshd for tunneling purposes).
    It is not actually safety that bothers me much; I'm just trying to gain some understanding of the topic, so I decided to try this particular setup.
    I've read the Simple Stateful Firewall article on the wiki and I'm now considering the following, but I'm not sure whether this is a good approach:
    # create chains
    iptables -N LAN
    iptables -N LAN_TCP
    iptables -N LAN_UDP
    # route all traffic from wireless devices to LAN chain
    iptables -A INPUT -m iprange --src-range 192.168.1.3-192.168.1.255 -j LAN
    # specific LAN chain rules
    iptables -A LAN -p tcp --syn -m conntrack --ctstate NEW -j LAN_TCP
    iptables -A LAN_TCP -p tcp --dport 22 -j ACCEPT
    Is it worthwhile? Are there better approaches? I suspect that if the router gets hacked, the hacker will be able to change its IP, so such rules won't work - will they?
    Just a thought: perhaps restricting by the router's MAC would be a better approach. Though I've written a lot of text already... So, anyway, I would like to get comments from forum members.
    Thanks in advance.

    That should work, although don't forget to DROP or REJECT by default:
    iptables -P INPUT DROP
    iptables / netfilter is very flexible and you can achieve any given task a number of ways. There are generally no "right" and "wrong" ways, just best practices here and there.

  • 1:N Replication Scenario Approach

    Hi,
    We plan to split HANA and would like to consider the parallel-run option, whereby we build the new hardware and do a migration/restore from the existing Production system onto the new box. Then we would like to switch on SLT for both the existing and the new HANA boxes and run the models in parallel to compare the results before we switch off the old one.
    This would mean that we need to set up new SLT triggers for the new server whilst the existing triggers are still running. In the current SLT configuration, "Allow Multiple Usage" is not selected. How can we do this in our landscape? What would be the best approach?

    Hi Roy,
    Look at this note; it describes how to set up 1:N once you have already created the configuration without the "Multiple Usage" flag.
    Best,
    Tobias
    SAP Note 1898479:
    SLT replication: Redefinition of existing DB triggers

  • Proxy or File which is better approach

    Hi
    I am using PI 7.1. I need to pass some information (approx. 40K records per day) from the SAP CRM database to a third-party application. The target communication will be file. What is the best approach for the source communication? Is proxy the better option, or should I write an extraction program in SAP CRM, generate a file, and then do the required transformations in PI?
    Regards,
    Nirupam

    Hi,
    First of all, as this is adapter-less communication, I think it is better to use it in as many scenarios as possible - why only for big messages? There is no limit on the number of proxy scenarios ;).
    Secondly, you need an ABAPer for the custom ABAP program to extract the data and create the file, so in either case you need both ABAP and PI skills.
    Monitoring will use the same tools; no extra tool is needed for proxy.
    And if you use proxy, you don't have to worry even if your message size increases in the future.
    If there is any change to the interface, you again need both an ABAPer and PI skills to make the changes.
    Hence I feel that, whenever possible, it is better to use proxy.
    Shweta.

  • Database design - Better approach than XML/ XSD

    Hi, I am designing a web-based application. The scenario I am working on: I have around 50+ objects. They have a few things in common, but the other fields differ (say, Employee / Customers / Assets, etc.). We are also providing a facility whereby the customer can add / rename / delete table columns from the UI. We are planning to use XML with XSD/XQuery, storing all objects in one table only, to accommodate the dynamic-schema issue. But it has a performance impact. Is there any other, better way to handle this situation?

    Basically, it really depends on what you are doing with the data. XML isn't so bad when you are just attaching some data onto an object, but when you need to use it in queries over a lot of rows, it just seems really cumbersome.
    My favorite way is to just add columns to the table, probably using sparse columns. That gives you column-level access like normal (including indexes and constraints), plus an XML schema method to put data in using XML-style access.
    The Entity-Attribute-Value style (http://en.wikipedia.org/wiki/Entity-attribute-value_model) works pretty well too. Basically you store the key of an object, the name of an attribute, and the value (and perhaps the name of the entity, but ideally I would suggest one table per object you are extending). It is more cumbersome to query, but works really naturally for front-end tools.
    However... one thing I think I hear in your requirements is that you are shooting for one table. I would strongly suggest against that approach in SQL Server. SQL Server's primary strength is as a relational engine. Sure, it is more work to adjust your schema when requirements change, but if you can get the base schema down, having these flexible elements in the implementation is not so damaging to performance.
    For example, say you are building a database of students. Students, teachers, classrooms, subjects, etc. are all pretty much known, but there are a lot of attributes that may differ between usages, so adding a flexible element is very useful for the end users. If you really want a tool with a flexible back end (something I am very much against unless you keep the row count low), look at how ORM tools create their data storage using tables. SQL Server itself is really and truly a flexible data-storage platform: using CREATE TABLE, ALTER TABLE, etc., you can fashion almost any structure on the fly...
    Louis
    Without good requirements, my advice is only guesses. Please don't hold it against me if my answer answers my interpretation of your questions.

  • Custom forms - Better approach?

    I have various forms for which the metadata fields are different. What is the best approach to accomplish this?
    1. Create HCST / HCSP pages and store the data in XML? (This way I don't have to write code to store the data in the database and retrieve it every time a user opens the document.)
    2. Create a JSP and store the metadata in the database? (I have to write code to store the data in the database and retrieve it every time a user opens the document, but I'll be able to query large amounts of metadata fairly quickly.)
    I'm also wondering whether I should add the metadata field via Configuration Manager every time I have a new form, or whether that should be restricted.

    Hi Anusha,
    considering the reusability aspect, the components approach is the much better one (see also the best-practices dev guide chapter regarding components in the SAPUI5 SDK - Demo Kit).
    It allows you to reuse that component in different applications or other UI components.
    I also think that the Application.js approach will not work with Fiori, because the Fiori Launchpad loads the Component.js of the Fiori app in a Component Container.
    Best Regards, Florian
