Cautious update of variables – best approach?

So ... I have a stack of user variables, some of which keep the same value in every chapter while others are manually adjusted. File > Import formats from a "good" file will trample on the manual variables in the others. Short of setting up a quick todelete.fm with just the one user variable I want to flood through the book, is there an easy way of updating just one? I suspect "plugin" may be the answer <g>

Niels,
How about MIF snippets, e.g.
<MIFFile 8.00>
<VariableFormats
 <VariableFormat
  <VariableName `MyVariable'>
  <VariableDef `Some product name'>
 > # end of VariableFormat
> # end of VariableFormats

Similar Messages

  • What is the best approach to converting LV7.1 tags to LV2012 shared variables in multiple VIs?

    What is the best approach to upgrading from LV7.1/DSC tags to LV2012/DSC shared variables, in multiple VIs running on multiple platforms? Our system is composed of about 5 PCs running Windows 2000/LV7.1 Runtime, plus a PLC, and a main controller running XP/SP3/LV2012. About 3 of the PCs publish sensor information via tags across the LAN to the main controller. Only the main controller is currently being upgraded. Rudimentary questions:
    1. Will the other PCs running the 7.1 RTE (with tags) be able to communicate with the main controller running 2012 (shared variables)?
    2. Is it necessary to convert from tags to shared variables, or will the deprecated legacy tag VIs from LV7.1 work in LV2012?
    3. Will all the main controller VIs need to be incorporated into a project in order to use shared variables?
    4. Is the only way to do this to find all tag items and replace them with shared variable items?
    Thanks in advance for any information and advice!
    lb

    Hi lb,
    We're glad to hear you're upgrading, but because there has been a fundamental change in architecture since version 7.1, there will likely be some portions that require a rewrite.
    The RTE needs to match the version of DSC you're using. Also, the tag architecture used in 7.1 is not compatible with the shared variable approach used in 2012. Please see the KnowledgeBase article Do I Need to Upgrade My DSC Runtime Version After Upgrading the LabVIEW DSC Module?
    You will also need to convert from tags to shared variables.  The change from tags to shared variables took place in the transition to LabVIEW 8.  The KnowledgeBase Migrating from LabVIEW DSC 7.1 to 8.0 gives the process for changing from tags to shared variables. 
    Hope this gets you headed in the right direction.  Let us know if you have more questions.
    Thanks,
    Dave C.
    Applications Engineer
    National Instruments

  • I have a MacBook Pro 5,4 running OS X 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    I have a MacBook Pro 5,4 running OS X 10.6.8 and Safari 5.1.10. A website I like has a known bug with 5.1.10 and recommends I install a newer version of Safari or use Firefox or Chrome. Just looking for advice on the best approach. Thanks!

    Unfortunately, Safari cannot be updated past 5.1.10 on a Mac running v10.6.8.
    So, the options are to upgrade to a newer OS X or use Firefox or Chrome.
    Be aware, Apple no longer supports Snow Leopard v10.6 > www.ibtimes.com/apple-kills-snow-leopard-os-x-106-no-longer-receives-security-updates-1558393
    See if your Mac can run v10.9 Mavericks >  OS X Mavericks: System Requirements
    If so, you can download and install Mavericks for free from the App Store.
    Read prior to upgrading >   Upgrading to 10.7 and above, don't forget Rosetta! | Apple Support Communities

  • What's the best approach for handling about 1300 connections in Oracle?

    What's the best approach for handling about 1300 connections in Oracle 9i/10g through a Java application?
    1. Using separate schemas for the various types of users. (We can store only the relevant data in a particular schema, so the number of records per table can be reduced by replicating tables, but we then have to maintain all the data in another schema as well. That means updating two schemas in a given session - one schema for the user and another schema holding all the data - so there may be update problems.)
    OR
    2. Using single schema for all users.
    Note: All users may access the same tables, and there may be many more records than in the previous case.
    What is the best option?
    Please give your valuable ideas.

    That is true, but I want a solution from you all. I want you to tell me how to fix my friend's car.

  • Best approach to return Large data ( 4K) from Stored Proc

    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller, compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

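    One hedged sketch of the CLOB-based variant being asked about (the procedure name and the loop that builds the payload are placeholders, not the original code; temporary CLOBs need Oracle 8.1.5 or later, and the ASP/ColdFusion drivers must support CLOB OUT parameters, which is worth verifying first):
    CREATE OR REPLACE PROCEDURE get_large_block (
      p_param1 IN  NUMBER,
      p_result OUT CLOB
    ) AS
      l_chunk VARCHAR2(4000);
    BEGIN
      -- Build the result directly in a temporary CLOB instead of staging a LONG in a temp table
      DBMS_LOB.CREATETEMPORARY(p_result, TRUE, DBMS_LOB.CALL);
      FOR i IN 1 .. 10 LOOP   -- stand-in for the real computation
        l_chunk := RPAD('x', 4000, 'x');
        DBMS_LOB.WRITEAPPEND(p_result, LENGTH(l_chunk), l_chunk);
      END LOOP;
    END get_large_block;
    /
    This avoids staging the buffer in a temp table and selecting it back through a REF CURSOR; whether it is actually faster for these callers still has to be measured per driver.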

  • What are the best approaches for mapping re-start in OWB?

    What are the best approaches for mapping re-start in OWB?
    We are using OWB repository 10.2.0.1.0 and OWB client 10.2.0.1.31. The Oracle version is 10 G (10.2.0.3.0). OWB is installed on Linux.
    We have number of mappings. We built process flows for mappings as well.
    I'd like to know what are the best approaches to incorporate re-start options in our process, i.e. a failure of a mapping in a process flow.
    How do we re-cycle failed rows?
    Are there any builtin features/best approaches in OWB to implement the above?
    Do the runtime audit tables help us to build a re-start process?
    If not, do we need to maintain our own tables (custom) to maintain such data?
    How did our forum members handle the above situations?
    Any idea ?
    Thanks in advance.
    RI

    Hi RI,
    How many mappings (range) do you have in a process flow?
    Several hundred (100-300 mappings).
    If we have three mappings (e.g. m1, m2, m3) in a process flow, what will happen if m2 fails?
    Suppose the mappings are connected sequentially (m1 -> m2 -> m3). When m2 fails, the process flow is suspended (the transition to m3 will not be performed). You should remove the cause of the error (modify the mapping and redeploy, correct the data, etc.) and then repeat the m2 mapping execution from Workflow Monitor - open the diagram with the process flow, select mapping m2, click the Expedite button and choose the Repeat option.
    On re-start, will it run m1 again and then m2 and so on, or will it re-start at row 1 of m2?
    You can specify the restart point. "At row 1 of m2" - I don't understand what you mean (all mappings run in set-based mode, so in case of error all table updates are rolled back,
    but there are several exceptions - for example multiple target tables in a mapping without correlated commit, or an error in a post-mapping - you must carefully analyze the results of the error).
    What will happen if m3 fails?
    The process is suspended and you can restart execution from m3.
    By running without failover and with max. number of errors = 0, you achieve re-cycling of failed rows down to zero (0). These settings guarantee only two possible return results for a mapping - SUCCESS or ERROR.
    What is the impact if we have a large volume of data?
    In my opinion, for large volumes set-based mode is the preferred data processing mode.
    With this mode you have the full range of Oracle database enterprise features - parallel query, parallel DML, nologging, etc.
    Oleg

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on an 11gR2 Oracle 3-node RAC database. Below are the DB details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now the management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data into the new partitioned table from the old one.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of down time in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    Thanks,
    Hari

    >
    In my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now the management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE(' 2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE(' 2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE(' 2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS') ) TABLESPACE &&TXN_TAB_PART3
    as (select * from TRANSACTION where 1=2);
    2. Exchange partition to move the data into the new partitioned table from the old one.
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. create required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the table names and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of down time in production to implement these changes for all tables.
    Can you please suggest the best approach to copy that much data from the existing table to the newly created partitioned table and create the required indexes?
    >
    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs in the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE#2 - You likely cannot move that much data in the 2 to 3 hours window that you have available for down time even if all you had to do was copy the existing datafiles.
    ISSUE#3 - Even if you can avoid issue #2 you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE#4 - Unless you have conducted full volume performance testing in another environment prior to doing this in production you are taking on a tremendous amount of risk.
    ISSUE#5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system you will have great difficulty overcoming issue #4 since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you the equivalent, or better, performance.
    ISSUE#6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning online (a minimal sketch follows after this list). See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can, you could use online redefinition that ignores the older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
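    For option 1, a hedged sketch of the online redefinition sequence (the schema name HARI and the table names are assumptions taken from this thread; the interim table TRANSACTION_N must already exist with the desired partitioning, and the necessary privileges on DBMS_REDEFINITION are required - verify the call details against your version's documentation before relying on this):
    DECLARE
      l_errors PLS_INTEGER;
    BEGIN
      -- 1. Confirm the table can be redefined using its primary key
      DBMS_REDEFINITION.CAN_REDEF_TABLE('HARI', 'TRANSACTION', DBMS_REDEFINITION.CONS_USE_PK);
      -- 2. Start redefinition into the pre-created partitioned interim table
      DBMS_REDEFINITION.START_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N', NULL, DBMS_REDEFINITION.CONS_USE_PK);
      -- 3. Copy dependent objects (indexes, triggers, constraints, grants)
      DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('HARI', 'TRANSACTION', 'TRANSACTION_N', DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, l_errors);
      -- 4. Re-sync rows changed during the copy, then swap the tables
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('HARI', 'TRANSACTION', 'TRANSACTION_N');
    END;
    /
    The bulk of the copy runs while the original table stays available for DML; only FINISH_REDEF_TABLE needs a short lock, which is what makes this approach attractive given the 2 to 3 hour window.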

  • Best approach to implement this requirement in OIM

    Hi experts
    We have an application (say App1) in which we use the GTC DB connector to provision users from OIM. But due to some limitations in App1, the communication has to be changed from GTC to a custom web services one.
    For this we have done the following configurations:
    1. Introduced new mandatory fields to the same process form of App1 (as per the requirement).
    2. Cleaned all the GTC adapters out of the process definition of App1 and attached the web-service-related adapters.
    So after introducing web services, the known issues could be:
    1. Revoke User fails for existing users, since the existing users were provisioned using the GTC connector.
    The reason is that, by default, if we revoke a user then the Revoke User task now contains the newly created web services adapter, which will not complete the operation, saying that mandatory values are missing for the fields in the process form.
    2. The same is the case for Disable/Enable User.
    What would be the best approach to perform the above operations for existing users who are already provisioned?
    We are not willing to create a new Resource Object for the web services communication.
    Any valuable pointers are highly appreciated.

    Hi
    I am trying to run the FVC utility to update the process form fields for old users.
    The FVC properties file looks like
    fvc.properties
    ResourceObject;GTCConnector
    FormName;UD_GTCFORM
    FromVersion;v1.0
    ToVersion;v3.0
    Parent;UD_GTCFORM_DUMMYFIELD;Default;Update
    The field UD_GTCFORM_DUMMYFIELD has been created in the latest active version v3.0. I am trying to push a value into older versions of the process form.
    When I run the script fvcutil_websphere.cmd, it throws the error message below.
    WSCL0100E: Exception received: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at com.ibm.ws.client.applicationclient.launchClient.createContainerAndLaunchApp(launchClient.java:747)
    at com.ibm.ws.client.applicationclient.launchClient.main(launchClient.java:469)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183)
    at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90)
    at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72)
    at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:226)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:376)
    at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:163)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:618)
    at org.eclipse.core.launcher.Main.invokeFramework(Main.java:334)
    at org.eclipse.core.launcher.Main.basicRun(Main.java:278)
    at org.eclipse.core.launcher.Main.run(Main.java:973)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:245)
    at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:73)
    at com.ibm.websphere.client.applicationclient.launchClient.main(launchClient.java:238)
    Caused by: java.util.NoSuchElementException
    at java.util.StringTokenizer.nextToken(StringTokenizer.java:347)
    at com.thortech.xl.util.fvcutil.FVCUtil.initialize(Unknown Source)
    at com.thortech.xl.util.fvcutil.FVCUtil.<init>(Unknown Source)
    at com.thortech.xl.util.fvcutil.FVCUtil.main(Unknown Source)
    ... 26 more
    Has anyone come across this type of error?
    Thanks in Advance

  • Best approach for performing DMLs using stored procedures

    Hi,
    I have a really general question and would like to hear your say about this.
    I want my application to manipulate or read data using stored procedures (or packages, for that matter) and not directly using queries against the DB.
    Let's say I have a table with many columns:
    create table test (pkid number(10), col1 varchar2(30), col2 number(10), col3 date, ...);
    For such a DML procedure, is it best to do something like
    procedure do_update(i_pkid IN number,i_col1 IN varchar2, i_col2 IN number, i_col3 IN date,...) as
    begin
       update test
       set col1=i_col1,
            col2=i_col2,
            col3=i_col3...
       where pkid=i_pkid;
       commit;
    end;Or do a selective update, meaning update only a certain column every time, given only 1 column actually changes? (and how to do that - separate procedures for each column? [columns can be nulls])
    Also, is it better to work with test.col1%type instead of specifying the data type in the procedure?
    And one last question - if I have a table with 100 columns and I don't want to create a procedure with 100 parameters, would the best approach be to use a record?
    I just need to be set on the way I start implementing things in order to do it well from the start.
    Many thanks.
    Edited by: Pyrocks on Nov 17, 2010 1:58 PM

    Pyrocks wrote:
    One last clarification (although it may be more related to c/c++ developers - maybe one of you will know):
    We are working with C++ and VB against a SQLServer and my part is to translate all the existing procedures to Oracle in order to migrate the application to work with Oracle DB.
    The existing procedures use an IN parameter for each column in the table and I would really like to use rowtype like you mentioned.
    Since I'm not a c/c++/vb developer - is there an easy way of working with such types, or even User-Defined Types, from c/c++/vb (as in passing a rowtype record from c++ to a SP ?)
    I'm not looking for the actual way - just want to know how hard it would be and how much code needs to be changed in order to be able to convince the developers that this is the RIGHT way to work.
    PS. None of our developers have experience with ORACLE so they wouldn't know the answer...
    Not actually an Oracle server-side (SQL language or PL/SQL language) question - but a client one. And it has been a long time since I wrote a fat client using C/C++ or Delphi.
    The OCI (Oracle Call Interface) supports advanced (user-defined) SQL data types, and has since Oracle 8i. So in that respect, yes, the client can support custom SQL data types.
    How well it does depends entirely on that client language's features wrt OCI integration. For example, Delphi 4 was released around Oracle 8i and supported custom SQL types. I would expect that most languages today (like Java and C#) will provide support for it.
    As for using %ROWTYPE - this is a PL/SQL construct as far as I know. Unsure whether it is supported by the OCI. What could support it is a pre-compiler like Pro*C. These enable you to mix pseudo SQL source code with client language source code. The pre-compilation step then replaces the pseudo SQL code with native client language calls to the OCI. The code is then compiled by that client language's compiler. Pre-compilers can pull all kinds of interesting "tricks" with their pseudo SQL code support.
    The best would be to consult the applicable client language's manuals that describe the interface it supports (via OCI) to Oracle.
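    For illustration, a minimal sketch of the record-based procedure discussed above (the table and column names come from the question; the procedure name is a placeholder, and whether the COMMIT belongs inside the procedure or with the caller is a separate design decision):
    CREATE OR REPLACE PROCEDURE do_update_rec (p_row IN test%ROWTYPE) AS
    BEGIN
      -- One parameter carries all columns; %ROWTYPE tracks future table changes automatically
      UPDATE test
         SET col1 = p_row.col1,
             col2 = p_row.col2,
             col3 = p_row.col3
       WHERE pkid = p_row.pkid;
    END do_update_rec;
    /
    From PL/SQL the caller can fill a test%ROWTYPE variable and pass it straight in; from C++/VB the practical route is usually an anchored parameter list or an object type, as discussed above, since %ROWTYPE itself is a PL/SQL construct.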

  • Best approach to migrate from 9.2.0.1 to 9.2.0.6

    Hi friends.
    Actually I have a production database running on Oracle 9.2.0.1 on SuSE Linux. I bought a new server and I have to migrate my database from the old one. On this new server I'll have Oracle 9.2.0.6 and RedHat. What is the best approach to make the migration?
    - By export/import? Will I have problems with users/grants/synonyms, etc, since I wont be able to use full import cause the system users?
    - Upgrade the 9.2.0.1 to 9.2.0.6 and copy the database to the new server?
    - Copy the database to the new server and upgrade only the database to 9.2.0.6 by scripts?
    - Anything else??? :)
    Thx in advance.

    I'm not sure what you mean by " I wont be able to use full import cause the system users". I can't seem to understand what you're trying to say here.
    If you have a relatively small database, exporting and importing is probably the simplest option. You can also restore a backup from the old machine to the new machine and upgrade the new machine in place, use transportable tablespaces, patch the old database and set up the new system as a standby database, etc. There are plenty of options, we'd need more information about how much data we're talking about, what sort of down time window you have, whether you're concerned about speed, administrative complexity, etc.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Best approach to publish new table or new column on existing table on MDW?

    Hi,
    I'm refering to Olite R3 without any patches. I don't use Java API, I use MDW.
    If I have a new table or a new column on an existing table, what's the best approach to publish it?
    I'm asking this because I've tried lots of approaches and the only solution I found was, step by step:
    1) On MDW, drop the publication item
    2) Add again the publication item
    3) Associate the publication item to the publication
    4) Save everything
    5) File / Deploy (if I don't do it, it does not work)
    6) Tools/Package... (that's where it's a problem: if I don't remove the app and create it again it does not work!)
    7) on the client side, I perform a msync with "force refresh"
    That's the only way I found to publish new items for sure. Any other action does not push the new table or new column to the client's embedded DB.
    Any comments?
    Regards,
    Maurício Américo Vernaschi.

    I do not use MDW, rather a mix of Java and the final publish step you use, but:
    Adding new PIs should be easy, just add them and re-publish (no need to drop anything).
    For changes, if you just have new columns and the SQL statement is 'select * from', then you should just need to make the changes in the base schema objects and run the publish with no changes, and the updates should be picked up. If selecting specific columns, then update and re-publish.
    When using MDW, at the end you can save the application as a jar file, and then use this jar file to publish in the Mobile Manager - this is the best way to publish.
    Have a look at this jar file in WinZip, and you will find it contains a web.xml file. This is the XML definition of the publication items, and for simple changes it is possible to just edit this file and republish via the Mobile Manager.

  • Best approach to change historical data???

    Hi experts,
    As a requirement of our project, R/3 will periodically do a material (key) value change on an existing product, that is, update the product code with a new one that substitutes for the old one.
    Ex: Material A (code M10013) becomes Material B (code M10014).
    We want to realign all historical data stored in BW from the old material code (M10013) to the new one (M10014).
    The question is whether there is a standard way to do this within BW.
    If not, what would be the best approach to deal with this?
    Thanks in advance for your help,
    Best regards,
    Enric
    Note: I have faced this same requirement in APO, and there's a Realignment functionality to convert CVCs, but I'm not sure if we have a similar tool in BW.

    Hi,
    Use the Data Mart concept.
    Load all the historical data of the old material under the new material code into a new ODS,
    then pull the data from the new ODS back into the existing one.
    Hope this helps,
    regards
    suresh

  • What is the best approach to return Large data from Stored Procedure ?

    no answers to my original post, maybe better luck this time, thanks!
    We have a stored proc (Oracle 8i) that:
    1) receives some parameters,
    2) performs computations which create a large block of data,
    3) returns this data to the caller, compatible with both ASP (using MSDAORA.Oracle) and ColdFusion (using the Oracle ODBC driver). This procedure is critical in terms of performance.
    I have written this procedure as having an OUT param which is a REF CURSOR to a record containing a LONG. In order to make this work, at the end of the procedure I have to store the working buffer (an internal LONG variable) into a temp table, and then open the cursor as a SELECT from the temp table.
    I have tried to open the cursor as a SELECT of the working buffer (from dual) but I get the error "ORA-01460: unimplemented or unreasonable conversion requested"
    I suspect this is taking too much time; any tips about the best approach here? Is there a resource with REAL examples of returning large data?
    If I switch to CLOB, will it speed up the process, be compatible with the callers, etc.? All references to CLOB I saw use trivial examples.
    Thanks for any help,
    Yoram Ayalon

    Create a new farm in the secondary Data Center at the same patch level with the desired configuration. Replicate the databases using the method of choice (Mirroring, AlwaysOn, etc.). Create a downtime window where you can then attach the databases to the new farm's Web Application(s)/Service Application(s).
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.

  • Looking for best approach to implement all ERP and CRM Analytics

    There is scope to implement all the ERP Analytics (HR, Finance, SCM, Procurement & Spend, Project) and CRM Analytics (Sales, Service, Marketing, Contact Center, Price, Loyalty). Just want to know the best approach for implementation of both ERP and CRM Analytics:
    1- Work with two separate containers with separate execution plan for ERP and CRM
    2- Work with one container with one execution plan for both ERP and CRM
    3- Work with one container with separate execution plans for both ERP and CRM
    Also, please let me know if there is any standard document to achieve both implementations simultaneously.
    Thanks!

    The answer, as always, is "it depends". Here is a list of things to consider with the 3 approaches you mentioned:
    1- Work with two separate containers with separate execution plan for ERP and CRM
    This is a good approach if you plan on a different frequency of ETL loads for each container or different timings. As long as the targets are the same and you run INCREMENTAL loads from both containers, then the refresh dates will be updated. This is usually a good idea if your ERP and CRM users have different timing and load requirements and do not want to "interfere" with each other. You will also have DATASOURCE NUM ID uniquely defined for each system so the data will be segregated. By default, each source system will have its own container.
    2- Work with one container with one execution plan for both ERP and CRM
    This is good if you are OK with the same exact load time for both. It's a simple approach and all tables will be loaded at the same time to avoid any confusion.
    3- Work with one container with separate execution plans for both ERP and CRM
    Not sure what the advantage of this option is versus keeping them in separate containers (like option 1).
    I would also review the section in the DAC guide for "MultiSource Execution Plans".
    if helpful, pls mark correct

  • Best approach to creating a TOC for product catalog using data merge

    What is the best approach for creating a TOC for a product catalog (over 1,000 items) using Data Merge?
    The TOC would contain the product Categories. 
    So for example, Category A items could go from pages 1 - 3, and Category B items would start at pg 4, but if new items were added to Category A, then Category B may start from pg 6. 
    From the Data Source, there are 5 Data Fields I've chosen to be displayed.  If this were a regular digital print document, I could use the Paragraph Style method for creating a TOC, but if I make any one of the Data Fields a certain Paragraph Style and use that for the TOC, it'll populate the TOC with that Data Field for all the items. 
    Any suggestions?

    Peter Spier wrote:
    TOC is not interactive in the ID file, though it can be in a PDF that you export (there's a checkbox to create PDF bookmarks). You might want to think about using Cross-references (rather than hyperlinks, I think) to build the TOC. You have to do it manually, but once done it should maintain itself, whereas a TOC is built automatically, but must be regenerated after you edit the doc.
    One caveat with TOCs created from cross-references: although an x-ref updates automatically (or when invoking "Update cross-references") when you change the text of its source paragraph (for example from "Patatas and tamatas" to "Tomatoes and Potatoes") and/or when the source paragraph flows to the next or previous page, MOVING a cross-reference source paragraph to a location before or after another source paragraph does not change their sequence in the pseudo-TOC. You'll need to manually move the reference in the pseudo-TOC to the correct position in the sequence of cross-refs. So, put the task of checking the order of x-refs in the pseudo-TOC on your before-hand-off checklist.
    HTH
    Regards,
    Peter
    Peter Gold
    KnowHow ProServices
