Steps of normalization

Hi,
Please explain what normalization and denormalization are, with examples.
Thanks.

Please go to the following links.
http://databases.about.com/od/specificproducts/a/normalization.htm
http://www.databasejournal.com/sqletc/article.php/1428511

Similar Messages

  • Oracle Data Mining - How to use PREDICTION function with a regression model

    I've been searching this site for Data Mining Q&A specifically related to the PREDICTION function and wasn't able to find anything useful on this topic, so I hope that posting it as a new thread will get useful answers for a beginner in Oracle Data Mining.
    Here is my issue with the PREDICTION function:
    Given a table with 17 weeks of sales for a given product, I would like to forecast the sales for week 18.
    For that let's start preparing the necessary objects and data:
    CREATE TABLE T_SALES (
    PURCHASE_WEEK DATE,
    WEEK NUMBER,
    SALES NUMBER
    );
    SET DEFINE OFF;
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('11/27/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 1, 55488);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/04/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 2, 78336);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/11/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 3, 77248);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/18/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 4, 106624);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('12/25/2010 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 5, 104448);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/01/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 6, 90304);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/08/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 7, 44608);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/15/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 8, 95744);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/22/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 9, 129472);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('01/29/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 10, 110976);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 11, 139264);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 12, 87040);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 13, 47872);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('02/26/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 14, 120768);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('03/05/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 15, 98463.65);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('03/12/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 16, 67455.84);
    Insert into T_SALES
    (PURCHASE_WEEK, WEEK, SALES)
    Values
    (TO_DATE('3/19/2011 23:59:59', 'MM/DD/YYYY HH24:MI:SS'), 17, 100095.66);
    COMMIT;
    There are a lot of linear regression models and approaches for sales forecasting on the market; however, I will focus on what Oracle 11g offers, i.e. the SYS.DBMS_DATA_MINING package, to create a model using regression as the mining function and then, once the model is created, to apply the PREDICTION function to the model.
    Therefore I'll have to go through a few steps:
    i) normalization of data
    CREATE OR REPLACE VIEW t_sales_norm AS
    SELECT week,
    sales,
    (sales - 91423.95)/27238.3693126778 sales_norm
    FROM t_sales;
    where the numerical values are the mean and the standard deviation:
    select avg(sales) from t_sales;
    91423.95
    select stddev(sales) from t_sales;
    27238.3693126778
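    As a minimal sketch of the same step without the hard-coded constants (assuming only Oracle's analytic AVG and STDDEV functions), the view can compute the statistics inline:
    -- Sketch: z-score normalization with the statistics computed inline.
    -- AVG(...) OVER () and STDDEV(...) OVER () evaluate over all rows.
    CREATE OR REPLACE VIEW t_sales_norm AS
    SELECT week,
           sales,
           (sales - AVG(sales) OVER ()) / STDDEV(sales) OVER () AS sales_norm
    FROM t_sales;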
    ii) auto-correlation. For the sake of simplicity, I will safely assume that there is no auto-correlation (no repetitive pattern in sales among the weeks). Therefore to define the lag data I will consider the whole set:
    CREATE OR REPLACE VIEW t_sales_lag AS
    SELECT a.*
    FROM (SELECT week,
    sales,
    LAG(sales_norm, 1) OVER (ORDER BY week) L1,
    LAG(sales_norm, 2) OVER (ORDER BY week) L2,
    LAG(sales_norm, 3) OVER (ORDER BY week) L3,
    LAG(sales_norm, 4) OVER (ORDER BY week) L4,
    LAG(sales_norm, 5) OVER (ORDER BY week) L5,
    LAG(sales_norm, 6) OVER (ORDER BY week) L6,
    LAG(sales_norm, 7) OVER (ORDER BY week) L7,
    LAG(sales_norm, 8) OVER (ORDER BY week) L8,
    LAG(sales_norm, 9) OVER (ORDER BY week) L9,
    LAG(sales_norm, 10) OVER (ORDER BY week) L10,
    LAG(sales_norm, 11) OVER (ORDER BY week) L11,
    LAG(sales_norm, 12) OVER (ORDER BY week) L12,
    LAG(sales_norm, 13) OVER (ORDER BY week) L13,
    LAG(sales_norm, 14) OVER (ORDER BY week) L14,
    LAG(sales_norm, 15) OVER (ORDER BY week) L15,
    LAG(sales_norm, 16) OVER (ORDER BY week) L16,
    LAG(sales_norm, 17) OVER (ORDER BY week) L17
    FROM t_sales_norm) a;
    iii) choosing the training data. Again, I will choose the whole set of 17 weeks, as it is not relevant for this discussion how big the training set should be.
    CREATE OR REPLACE VIEW t_sales_train AS
    SELECT week, sales,
    L1, L2, L3, L4, L5, L6, L7, L8, L9, L10,
    L11, L12, L13, L14, L15, L16, L17
    FROM t_sales_lag a
    WHERE week >= 1 AND week <= 17;
    iv) build the model
    -- exec SYS.DBMS_DATA_MINING.DROP_MODEL('t_SVM');
    BEGIN
    sys.DBMS_DATA_MINING.CREATE_MODEL( model_name => 't_SVM',
    mining_function => dbms_data_mining.regression,
    data_table_name => 't_sales_train',
    case_id_column_name => 'week',
    target_column_name => 'sales');
    END;
    /
    v) finally, where I am confused: applying the PREDICTION function against this model and making sense of the results.
    Searching on Google, I found two ways of applying this function to my case.
    One way is the following:
    SELECT week, sales,
    PREDICTION(t_SVM USING
    LAG(sales,1) OVER (ORDER BY week) as l1,
    LAG(sales,2) OVER (ORDER BY week) as l2,
    LAG(sales,3) OVER (ORDER BY week) as l3,
    LAG(sales,4) OVER (ORDER BY week) as l4,
    LAG(sales,5) OVER (ORDER BY week) as l5,
    LAG(sales,6) OVER (ORDER BY week) as l6,
    LAG(sales,7) OVER (ORDER BY week) as l7,
    LAG(sales,8) OVER (ORDER BY week) as l8,
    LAG(sales,9) OVER (ORDER BY week) as l9,
    LAG(sales,10) OVER (ORDER BY week) as l10,
    LAG(sales,11) OVER (ORDER BY week) as l11,
    LAG(sales,12) OVER (ORDER BY week) as l12,
    LAG(sales,13) OVER (ORDER BY week) as l13,
    LAG(sales,14) OVER (ORDER BY week) as l14,
    LAG(sales,15) OVER (ORDER BY week) as l15,
    LAG(sales,16) OVER (ORDER BY week) as l16,
    LAG(sales,17) OVER (ORDER BY week) as l17
    ) pred
    FROM t_sales a;
    WEEK, SALES, PREDICTION
    1, 55488, 68861.084076412
    2, 78336, 104816.995823913
    3, 77248, 104816.995823913
    4, 106624, 104816.995823913
    As you can see, the first row has a value of 68861.084, and the remaining 16 rows always share one and the same value, 104816.995.
    Question: where is my week 18 prediction? Or maybe I should say: which one is it?
    Another, even more confusing, way of using PREDICTION is against the lag table:
    SELECT week, sales,
    PREDICTION(t_svm USING a.*) pred
    FROM t_sales_lag a;
    WEEK, SALES, PREDICTION
    1, 55488, 68861.084076412
    2, 78336, 75512.3642096908
    3, 77248, 85711.5003385927
    4, 106624, 98160.5009687461
    Each of the 17 rows gets its own 'prediction' result.
    Same question: which one is my week 18 prediction?
    Thank you very much for all help that you can provide on this matter.
    It is as always highly appreciated.
    Serge F.
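    A hedged sketch of one way to get a week-18 number (an illustration under the thread's own setup, not a confirmed answer): the model's predictors L1..L17 for a week-18 case must hold the actual sales of weeks 17 down to 1, which can be arranged by appending a placeholder row for week 18 before computing the lags, then scoring only that case.
    -- Sketch: build a single week-18 case and score only that case.
    -- Assumes the t_sales table and t_SVM model created above.
    SELECT week,
           PREDICTION(t_SVM USING a.*) pred
    FROM (SELECT week,
                 LAG(sales, 1) OVER (ORDER BY week) L1,
                 LAG(sales, 2) OVER (ORDER BY week) L2,
                 LAG(sales, 3) OVER (ORDER BY week) L3,
                 LAG(sales, 4) OVER (ORDER BY week) L4,
                 LAG(sales, 5) OVER (ORDER BY week) L5,
                 LAG(sales, 6) OVER (ORDER BY week) L6,
                 LAG(sales, 7) OVER (ORDER BY week) L7,
                 LAG(sales, 8) OVER (ORDER BY week) L8,
                 LAG(sales, 9) OVER (ORDER BY week) L9,
                 LAG(sales, 10) OVER (ORDER BY week) L10,
                 LAG(sales, 11) OVER (ORDER BY week) L11,
                 LAG(sales, 12) OVER (ORDER BY week) L12,
                 LAG(sales, 13) OVER (ORDER BY week) L13,
                 LAG(sales, 14) OVER (ORDER BY week) L14,
                 LAG(sales, 15) OVER (ORDER BY week) L15,
                 LAG(sales, 16) OVER (ORDER BY week) L16,
                 LAG(sales, 17) OVER (ORDER BY week) L17
          FROM (SELECT week, sales FROM t_sales
                UNION ALL
                SELECT 18, CAST(NULL AS NUMBER) FROM dual)) a
    WHERE week = 18;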

    Kindly let me know how to give input to predict the values. For example, the script to create the model is as follows:
    drop table data_4svm;
    drop table svm_settings;
    begin
    dbms_data_mining.drop_model('MODEL_SVMR1');
    end;
    /
    CREATE TABLE data_4svm (
    id NUMBER,
    a NUMBER,
    b NUMBER
    );
    INSERT INTO data_4svm VALUES (1,0,0);
    INSERT INTO data_4svm VALUES (2,1,1);
    INSERT INTO data_4svm VALUES (3,2,4);
    INSERT INTO data_4svm VALUES (4,3,9);
    commit;
    --setting table
    CREATE TABLE svm_settings (
    setting_name VARCHAR2(30),
    setting_value VARCHAR2(30)
    );
    --settings
    BEGIN
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.svms_kernel_function, dbms_data_mining.svms_linear);
    INSERT INTO svm_settings (setting_name, setting_value) VALUES
    (dbms_data_mining.svms_active_learning, dbms_data_mining.svms_al_enable);
    COMMIT;
    END;
    /
    --create model
    BEGIN
    DBMS_DATA_MINING.CREATE_MODEL(
    model_name => 'Model_SVMR1',
    mining_function => dbms_data_mining.regression,
    data_table_name => 'data_4svm',
    case_id_column_name => 'ID',
    target_column_name => 'B',
    settings_table_name => 'svm_settings');
    END;
    /
    --to show the output
    select class, attribute_name, attribute_value, coefficient
    from table(dbms_data_mining.get_model_details_svm('MODEL_SVMR1')) a, table(a.attribute_set) b
    order by abs(coefficient) desc;
    -- to get predicted values (Q1)
    SELECT PREDICTION(MODEL_SVMR1 USING *
    ) pred
    FROM data_4svm a;
    Here I am not sure how to predict B values. Please suggest the proper usage. Moreover, in a GUI (.NET Windows form), how can the user give input and the system respond using Q1?
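    A minimal sketch of scoring one new case (assuming the Model_SVMR1 built above): PREDICTION accepts inline expressions aliased to the model's attribute names, so a single new value of A can be supplied directly; a GUI front end would bind the user's input into this same query.
    -- Sketch: predict B for a hypothetical new input A = 4.
    -- The alias "a" must match the model's predictor column name.
    SELECT PREDICTION(Model_SVMR1 USING 4 AS a) AS predicted_b
    FROM dual;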

  • Multi table inheritance and performance

    I really like the idea of multi-table inheritance, since I have a main
    class and three subclasses which just add one integer to the main class.
    It would be a waste to spend 4 tables on this, so I decided to put them
    all into one.
    My problem now is that when I query for a specific class, Kodo will build
    SQL like:
    select ... from table where
    JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
    This is pretty slow when the table grows, because string comparisons are
    awful - and even worse: the database has to compare nearly the whole
    string, because it differs only in the last letters.
    Indexing would help a bit, but wouldn't outperform integer comparisons.
    Is it possible to get Kodo to do one more step of normalization?
    Having an extra table containing all class names and ids for them (and
    references in the original table) would improve the performance of
    multi-tables quite a lot!
    Even with standard classes it would save a lot of memory not having the
    full class name in each row.
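    A minimal sketch of the normalization being proposed (hypothetical table and column names, loosely following the JDOCLASSX naming above): a small mapping table holds one row per concrete class, and a class-constrained query becomes an indexed integer join instead of a long string comparison.
    -- Hypothetical mapping table: one row per concrete class.
    CREATE TABLE class_mapping (
        idx        NUMBER PRIMARY KEY,
        classnamex VARCHAR2(255) NOT NULL UNIQUE
    );
    -- The data table stores the small integer key instead of the class name;
    -- a query for one class then compares indexed integers, not strings.
    SELECT v.*
    FROM   class_values v
    JOIN   class_mapping m ON v.class_idx = m.idx
    WHERE  m.classnamex = 'de.mycompany.myprojectname.mysubpack.classname';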

    Stefan-
    Thanks for the feedback. Note that 3.0 does make this simpler: we have
    extensions that allow you to define the mechanism for subclass
    identification purely in the metadata file(s). See:
    http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
    The idea of having a separate table mapping numbers to class names is
    good, but we prefer to have as few Kodo-managed tables as possible. It
    is just as easy to do this in the metadata file.
    In article <[email protected]>, Stefan wrote:
    First of all: thanks for the fast help; this one (IntegerProvider) helped and
    solves my problem.
    Kodo is really amazing with all its places where customization can be
    done!
    Anyway, as a wish for future releases: exactly this technique - using
    integers as class identifiers rather than the full class names - is what I
    meant by "normalization".
    The only thing missing is a table containing information on how classIDs
    are mapped to class names (which is now contained as an explicit statement
    in the .jdo file). This table is not mapped to the primary key of the main
    table (as you suggested), but to the classID integer, which acts as a
    foreign key.
    A query for a specific class would be solved with a query like:
    select * from classValues, classMapping where
    classValues.JDOCLASSX=classmapping.IDX and
    classmapping.CLASSNAMEX='de.company.whatever'
    This table should be managed by Kodo, of course!
    Imagine a table with 300,000 rows containing only 3 different derived
    classes.
    You would have an extra table with 4 rows (base class + 3 derived types).
    Searching for the classID is done in that 4-row table, while searching the
    actual class instances would then be done over an indexed integer classID
    field.
    This is much faster than having the database do 300,000 string
    comparisons (even when indexed).
    (By the way, it would save a lot of memory as well, even on classes which
    are not derived.)
    If this technique were done by Kodo transparently, maybe turned on with an
    extra option... that would be great, since you wouldn't need to take care
    of different "subclass-indicator-values", could go on as always, and would
    have far better performance...
    Stephen Kim wrote:
    You could push off fields to separate tables (as long as the pk column
    is the same); however, I doubt that would add much performance benefit
    in this case, since we'd simply add a join (e.g. select data.name,
    info.jdoclassx, info.jdoidx where data.jdoidx = info.jdoidx and
    info.jdoclassx = 'foo'). One could turn off the default fetch group for
    fields stored in data, but now you're adding a second select to load one
    "row" of data.
    However, we DO provide an integer subclass provider which can speed
    these sorts of queries up a lot if you need to constrain your queries by
    class, esp. with indexing, at the expense of simple legibility:
    http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
    Steve Kim
    [email protected]
    SolarMetric Inc.
    http://www.solarmetric.com
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • New PC and diff. Drive letter for iTunes (Steps to take, please comment)

    I have bought a new PC, and I want to transfer my library to it, but the problem is that the drive letter will be different.
    (On my old PC, the music is located on the E: drive; on the new computer, the music will be located on the D: drive.)
    Now I have the following idea, and I would like to know what the experts here think of it.
    (First, of course, make backup copies of the music and the .XML and .ITL files.)
    On the old PC, in iTunes:
    - Change the media folder location to an external drive (for example, drive letter G:\)
    (Note: "Copy files to iTunes media folder ... " is already selected)
    - Deauthorize the PC
    On the new PC:
    - Install iTunes.
    - After installing, shut down iTunes.
    - Copy the library (.XML & .ITL) from the old PC to the new PC.
    - Connect the external drive (make it drive letter G:\ again).
    - Start up iTunes.
    - If everything is OK, change the media folder location from external drive G:\ to local drive D:\.
    - Authorize the PC.
    Am I missing something?
    Thanks in advance

    Sounds like you currently have a split library where the library files are at their default location in your user profile, but the media folder is on an external/secondary drive. While it is possible to move such a library in the way you describe, it may be easier if you can make the library "portable" before you start.
    Here are typical layouts for the iTunes folders: [images of the folder layouts omitted]
    When the media folder sits inside the library folder, the library is considered to be portable. You can rearrange things to make a split library portable by taking a number of small steps which don't break the library.
    The basic non-fatal manipulations are:
    -> You can connect to an alternate set of library files by holding down Shift (Win) or Option (Mac) when starting iTunes.
    -> You can move the library files to a new location as long as the media stays put.
    -> You can move the library files and the media together if the media folder is a direct subfolder of the library folder.
    -> If you have already moved/copied the media content to a different location then you only need to copy the library files for it to appear as if you have moved the entire library in the way allowed above. I.e. just copy the library files into the parent folder of the media folder.
    -> You can rename the media folder to *iTunes Media* (if it isn't already) if the media folder is inside the library folder.
    -> iTunes uses the name of the folder holding the library files as the window title. Having made a library "portable" you may need to take a final step of renaming the library folder to iTunes or, if the library files have ended up at the root of a drive, moving all of the library files and content folders into a new folder called iTunes.
    After each change you need to open, test and close the relevant library before attempting another change. If a change broke the library, undo it or revert to using the previous set of library files.
    In essence all you need to do to make your library portable is copy the library files into the parent folder of the media folder on the external/secondary drive and use the hold-down-option-when-starting-iTunes method to connect to it. Other manipulations may be required to normalize the library so that the library and media folders have standard names.
    Once you've made your current library portable, you can take a complete clone of the iTunes folder onto your external drive, and then copy that folder onto your new computer. Rather than looking at this as a one-time proposition, you should aim to keep the copy of the library on your external drive as a backup.
    *Fast backup for iTunes library (Windows Only)*
    Grab SyncToy 2.1, a free tool from MS. This can be used to copy your entire iTunes library (& other important data folders) onto another hard drive or network share. You can then use SyncToy periodically to synchronise or echo your library to the backup. A preview will show which files need to be updated, giving you a chance to spot unexpected changes, and during the run only the new or updated files will be copied, saving lots of time. And if your media is all organised below the main iTunes folder, then you should also be able to open the backup library on any system running the same version of iTunes.
    Don't forget to deauthorise the old computer if you're not planning to use iTunes on it again.
    tt2

  • How to create PDF and HTML using Oracle Policy Automation, with steps

    Hi,
    I tried to create a PDF report, but I have some problems while creating the XSLT.
    Please provide more steps or a simple example of creating the XSLT, and also which XML to use for it.

    Below is a working (OPA 10.1) example, which I found earlier in this forum. You can call it from a document control on the summary screen. The XML provided by the server contains all attributes with a public name.
    <?xml version="1.0" encoding="iso-8859-1"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:fo="http://www.w3.org/1999/XSL/Format"
                    xmlns:fn="http://www.w3.org/2005/xpath-functions">
    <xsl:output method="xml" indent="yes" encoding="utf-8"/>
    <xsl:template match="/">
    <fo:root>
      <fo:layout-master-set>
        <fo:simple-page-master master-name="my-page">
          <fo:region-body margin="1in"/>
        </fo:simple-page-master>
      </fo:layout-master-set>
      <fo:page-sequence master-reference="my-page">
        <fo:flow flow-name="xsl-region-body">
          <fo:block margin-top=".5cm" margin-left=".1cm" border-start-style="solid">
            <xsl:apply-templates mode="dump" select="/session"/>
          </fo:block>
        </fo:flow>
      </fo:page-sequence>
    </fo:root>
    </xsl:template>
    <xsl:template match="*" mode="dump" priority="100">
                <fo:block margin-top=".5cm" margin-left=".1cm" border-start-style="solid">
                    Node:
                    <xsl:value-of select="name()"/>
                    <xsl:if test="count(@*) &gt; 0">
                        <fo:block margin-left="1cm">
                            Attributes:
                            <xsl:for-each select="@*">
                                <fo:block><xsl:value-of select="name()"/>=<xsl:value-of select="."/></fo:block>
                            </xsl:for-each>
                        </fo:block>
                    </xsl:if>
                    <xsl:if test="string-length(normalize-space(text())) > 0">
                        <fo:block>Text Value: "<xsl:value-of select="text()"/>"</fo:block>
                    </xsl:if>
                    <xsl:apply-templates mode="dump" select="*"/>
                </fo:block>
        </xsl:template>
    </xsl:stylesheet>

  • Multiple source messages in transformation step in bpm

    Hello,
    I am using a transformation step in BPM. I have two source messages and one target message, and I get the error 'expression must return the interface type XXXX (in source message)'. I checked my source message in the interface mapping and the container element; the types do match.
    All the interfaces in the OM as well as in the BPM container definition are of type abstract.
    It seems to have something to do with the interfaces belonging to different software components?
    I declared a usage dependency from one SWCV (holding the BPM) to another (holding the abstract interfaces which cause the error in the transformation step), so that I can reference them in the BPM.
    I thought it might have something to do with how the message interfaces are selected in the operation mapping, so I tried to select the messages through the dependency path, but that was of no help either.
    Have I been clear?
    Any ideas?
    Thanks
    Matthias

    Since no receiver information is available in the transformation step, there can be no value mapping within the transformation  step. If the messages to be transformed give values in different formats, for example different date formats, you must first normalize the values before the messages can be processed in the process. To do so, define a message mapping with a corresponding value mapping.
    Check the help page below for reference
    http://help.sap.com/saphelp_nwpi71/helpdata/en/27/db283fd0ca8443e10000000a114084/content.htm

  • Normalize sound Premiere Pro CS 4

    Hi.
    I have a lot of clips in my sequence where the audio is too low or too high. How do I normalize them all in one step?

    I'm not a heavy Sound Booth user because I come from a pro audio background and tend to lean on those tools. That said, when I need to process audio for an entire track, typically what I do is export as audio only, bring it up in my audio app (Sound Booth would probably work just fine), clean it up, save it out, and then import the edited audio track back in.
    By the way, when you're normalizing, you might first want to do a pass with a limiter to chop off the highest peaks. Since normalizing only scoots up stuff by the distance of the very highest peak to 0 dB (or whatever ceiling you set), one stray spike in volume can prevent the rest of the audio from being boosted.
    If you look at your waveform, you'll doubtless see that most of the energy resides within a particular range, and then you have some thin spikes here and there. Let's say that the bulk of the energy resides from -6 dB down. If you run the limiter to make a 6 dB reduction, it snips off the errant peaks. Then, when you normalize, you'll get the 6 dB boost across the board.
    Going too far in this direction with a limiter will start to give you some distortion, so it's best to experiment a couple of times on a copy until you've got the hang of it. The end result will be more consistent audio across the board as well as enhanced levels for you to work with once you're back in PPro.
    If you have it, Adobe Audition is a great tool for this kind of work. If not, I'm reasonably sure you can accomplish it with Sound Booth.
    Hope this helps,
    Chris

  • Steps to connect a CRM system with MDM

    Hi everybody,
    I've got a problem here: I should build a connection between a CRM system and MDM (get the data from several tables in the CRM system, normalize it in MDM, and then put it back into the CRM).
    What are the steps I have to take to achieve that? Do I need XI between the two systems? Do I have to map the tables before sending them to MDM (and how could I do that)?
    Thank you for all the help you can give me
    Seb

    Hi
    The Universal Worklist (UWL) iView is part of the KMC (Knowledge Management and Collaboration) capability of SAP NetWeaver. The integration of the SAP NetWeaver MDM workflow with UWL will be provided with SP04. This will provide the ability to expose SAP NetWeaver MDM workflow steps in the SAP Universal Worklist (UWL). These jobs may run from the BPM process or can be launched from within SAP NetWeaver MDM. UWL is defined as the main consolidated "Inbox" for SAP users, being the main location to monitor jobs directly activated in SAP NetWeaver MDM, or connected to a process that integrates with SAP NetWeaver MDM.
    Using the SAP NetWeaver Portal UI, it shows a summarized list of all workflow (WF) jobs/tasks from all systems of the SAP NetWeaver MDM landscape per user - one single UI for all my tasks. The UWL is already integrated with other major SAP products, like R/3, SAP CRM, mySAP ERP, etc.
    https://sapportal.wdf.sap.corp/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/infocenters/ws%20ptg/ptg/platform%20ecosystem/industry%20standards/projects%20and%20programs/business%20process%20management/index.html
    First of all, you must set up RFC connectivity between CRM and the back-end system:
    o     Create system users for remote connection (RFC_CONN). This user must have SAP_ALL authorization.
    o     Create RFC connections in CRM system.
    Go to transaction SM59 on the CRM side and click on the 'create' button. Set the RFC destination name to the back-end system <sid>CLNT<client> (e.g. R3DCLNT200), the connection type to 3 (ABAP / R/3 connection), and fill in the target host of your back-end system. On the Logon/Security tab, add the RFC_CONN user created in the previous step. Save your settings.
    o     Create RFC connections in the back-end system.
    Go to transaction SM59 on the back end and click on the 'create' button. Set the RFC destination name to the CRM system <sid>CLNT<client> (e.g. CR1CLNT300), the connection type to 3 (ABAP / R/3 connection), and fill in the target host of your CRM system. On the Logon/Security tab, add the RFC_CONN user created in the previous step. Save your settings.
    o     Create a logical system name in CRM.
    Go to transaction SPRO -> SAP Implementation Guide -> Customer Relationship Management -> CRM Server -> Technical Basic Settings -> ALE Settings -> Distribution -> Basic Settings -> Logical Systems -> Define Logical System. Add a new value.
    Save your settings.
    o     Create a logical system name in the back end.
    Go to transaction SPRO -> SAP Customizing Implementation Guide -> SAP Web Application Server -> Application Link Enabling -> Sending and Receiving Systems -> Logical Systems -> Define Logical System. Add a new value.
    Save your settings.
    3.     Middleware parameters setup. Go to transaction SM30 on the back-end system and choose table CRMCONSUM. Add the following values:
    4.     Next, choose CRMSUBTAB for the subscription table for the Up and Download Object and add the following values:
    User     ObjectName     U/D     Obj. Class     Function     Obj. Type     Funct. Name
    CRM     empty     Download     Material     empty     empty     CRS_MATERIAL_EXTRACT
    CRM     empty     Download     Material     empty     empty     CRS_CUSTOMIZING_EXTRACT
    CRM     empty     Download     Material     empty     empty     CRS_SERVICE_EXTRACT
    Next, choose CRMRFCPAR for definitions of RFC Connections and add the following values
    User     ObjectName     Destination     Load Type     INFO     InQueue Flag     Send XML
    CRM     *     SR1CLNT300     Initial Download     SR1CLNT300     X     Send XML
    CRM     *     SR1CLNT300     Request     SR1CLNT300     X     Send XML
    CRM     MATERIAL     SR1CLNT300     Initial Download     SR1CLNT300     X     Send XML
    Leave all other fields empty.
    Now, configure filtering for the material master:
    Choose the CRMPAROLTP table for CRM OLTP Parameters and add the following values:
    Parameter name     Param. Name 2     Param. Name 3     User     Param. Value     Param. Value 2
    CRM_FILTERING_ACTIVE     MATERIAL     empty     CRM     X     empty
    Now we must edit the table for the application indicator. Go to transaction SE16N on the back-end side and choose table TBE11. Search or add an application component BC-MID and edit activity settings (field AKTIV = X).
    Save your settings.
    8.     Enterprise Buyer with/without CRM. In this activity, you define whether you are running Enterprise Buyer and CRM in the same system. This might accelerate the master data download performance. If you are using CRM in the client, then skip this activity.
    In the CRM system run transaction BBP_PRODUCT_SETTINGS, deselect 'Test mode' and choose the Execute button.
    The system generates a report containing all tables that have been deactivated.
    9.     Generate repository objects. With this procedure, you generate the middleware function modules (BDoc Object Type) for the material master.
    Go to transaction SMOGGEN and choose the objects PRODUCT_MAT and PRODUCT_SRV. Generate services for all object categories.
    Please let me know your email id; I shall send you this document.
    Regards
    Rehman
    Reward Points if Useful

  • 3 weeks into learning Oracle, and I think I am ready for the next step.

    Hi all,
    I am in need of some suggestions. I have been learning Oracle for almost 4 weeks now, and I need to design a database with some real-world functionality. I want it to be fairly difficult so that I am able to learn things by being "thrown to the wolves", so to speak. I will be designing it from scratch, basically: requirements gathering, ERD and normalization, table creation, and then implementing functionality in increments. I know that is a lot on my plate, but I'm sure I can do it with the amount of resources and documentation available.
    First things first, I will be using Oracle 10g XE as the database software. I am trying to determine what kind of database I want to design though. I really would like to stay away from the traditional/generic databases, if you know what I mean. Or if it is one of the popular starter DBs, then something with a twist maybe. Here is one of the things I have thought about, hopefully some of you guys that respond can shoot some more ideas at me:
    Corporate inventory database for a company that does a lot of field service work with local parts, but would sometimes require parts to be shipped from other locations:
    DB would track inventory for all offices throughout company.
    Basic users could only view inventory for each office.
    Mid-level users could add/remove items at their local office, and request items to be shipped from other offices.
    Management-users could add/remove items, request items, and would be able to approve any pending inventory transfers.
    DBA - The administrator, can add new users, remove users, and all the other power that comes with being a DBA.
    Does something like this seem like a good project? I would be able to address concurrency, error handling (with codes), multiple user types, etc.
    Can somebody suggest a front-end for this?
    I was thinking to maybe make a JSP/JSF web based service (to be run locally at first) for all of the forms and such and then use the Oracle XE for the back-end. Does anybody think there is something more suitable?
    Also, if anybody can think of another DB system to design that may be better to learn from or more interesting, please suggest it.
    Best regards to all those positive community members,
    Jason

    I would like to take this project slow, I mean I do not plan on having this be a working application in 2 weeks. I would like to slowly develop it and move on to each step after the previous step is understood.
    Such as, first start by understanding what tables and keys I will be needing in order to have a database which tracks inventory. Then create users that only have certain privileges. Then start to slowly create the functions/procedures and triggers that will be used to perform certain transactions on the database, starting with the most simple ones first. Finally decide on a front-end that will be best for allowing users to perform transactions in a friendly interface.
    By no means do I think this is simple, but I do believe if I approach it correctly I can learn a lot. I was basically just hoping the experienced users in this community could help me understand the best approach and some basic ideas I may need to understand about developing a project like this.
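    A minimal sketch of the normalized core tables such a project implies (hypothetical names and plain Oracle DDL; users, privileges, and triggers would be layered on afterwards):
    -- Hypothetical core schema for the multi-office inventory project.
    CREATE TABLE offices (
        office_id   NUMBER PRIMARY KEY,
        office_name VARCHAR2(100) NOT NULL
    );
    CREATE TABLE items (
        item_id   NUMBER PRIMARY KEY,
        item_name VARCHAR2(100) NOT NULL
    );
    -- Stock on hand per office; the composite key keeps one row per pair.
    CREATE TABLE office_inventory (
        office_id NUMBER REFERENCES offices,
        item_id   NUMBER REFERENCES items,
        quantity  NUMBER DEFAULT 0 NOT NULL,
        PRIMARY KEY (office_id, item_id)
    );
    -- Transfer requests awaiting management approval.
    CREATE TABLE transfer_requests (
        request_id  NUMBER PRIMARY KEY,
        from_office NUMBER REFERENCES offices,
        to_office   NUMBER REFERENCES offices,
        item_id     NUMBER REFERENCES items,
        quantity    NUMBER NOT NULL,
        status      VARCHAR2(10) DEFAULT 'PENDING'
                    CHECK (status IN ('PENDING','APPROVED','REJECTED'))
    );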

  • Normalization on table with 1000 columns

    I am new to data modelling and need some suggestions on this.
    We have 4 files coming from an external vendor, and I am responsible for creating a data model. One of the files has 1000 columns and the others have 5 and 14 columns. How do I need to approach this and create a model? Will all the columns be combined together and normalization applied? The issue is that it has a lot of columns.
    A reply is appreciated

    user965115 wrote:
    I am new to data modelling and need some suggestions on this. We have 4 files coming from an external vendor, and I am responsible for creating a data model. One of the files has 1000 columns and the others have 5 and 14 columns. How do I need to approach this and create a model? Will all the columns be combined together and normalization applied? The issue is that it has a lot of columns.
    As already commented, we cannot provide any suggestions on a data model, except for pointing you at relational design and 3NF (3rd Normal Form).
    Technically - Oracle is not just a database. It is a data processing platform.
    This means that when you want to crunch data, it is best done inside Oracle. So you need to get the contents of the files into Oracle, first. Once in Oracle, you can process the file contents and extract the data in an intelligent way in order to populate or update the data model.
    I suggest you look at the external table feature of Oracle for loading the data. Typically in ELT processing, the loaded data will go into staging tables - tables that reflect the structure of the file. SQL is then used to transform the data in this primitive file structure, to a normalised database structure.
    The 2 SQLs that are typically used are:
    - INSERT INTO relational_table SELECT <transformation> FROM external_table
    - MERGE (allows inserts and updates of transformed data at the same time)
    For very complex transformations, PL/SQL pipelines can be considered.
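    A minimal sketch of the external-table-plus-staging pattern described above (hypothetical directory, file, and column names):
    -- Hypothetical directory object pointing at the vendor files.
    CREATE OR REPLACE DIRECTORY vendor_dir AS '/data/vendor';
    -- External table mirroring a (much simplified) file layout.
    CREATE TABLE stg_vendor_feed (
        cust_id   NUMBER,
        cust_name VARCHAR2(100),
        region    VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY vendor_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
        )
        LOCATION ('feed1.csv')
    );
    -- Transform-and-load from the staging structure into a normalized target.
    INSERT INTO customers (cust_id, cust_name, region_id)
    SELECT s.cust_id,
           TRIM(s.cust_name),
           r.region_id
    FROM   stg_vendor_feed s
    JOIN   regions r ON r.region_name = s.region;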
    But keep in mind that this can only be as good as the design of the relational data model. Does not matter if you get the technical steps done in an optimal way - as using these steps to load data into a poorly designed relational model will still be a failure.
    Getting the data model correct in design, and in physical implementation into the database, determine success.

  • Need Help ASAP with steps in repairing file

    I am currently recording an audio book at 48000 Hz, 24-bit mono (as requested by the individual), and I need to do three things. I'm not sure if the order is important or if I have extra steps that aren't needed. The three things needed: repair clipping, remove noise, and normalize to -0.1.
    I have been running "restore lightly clipped" and it seems to work fine, but I notice it reduces the amplitude (no problem; I know I can normalize). I also need to extract the noise, with which I can use noise reduction, and then I need to normalize everything. Are these the proper steps, or should I be doing it differently, and if so, how and why? I am doing this out of the country right now, and the internet is very slow and irregular, so I patiently await any response anyone may have.
    Thank you so much....

    Thank you for your input and suggestions. The language that we are recording is one that has many sounds that will peak unless we set the gain so low there would hardly be any sound at all. We are also in an area with many outside noises such as cars, motorcycles, construction work, etc.
    Our sound booth is nothing more than mattresses put together to enclose the reader, and this is one of the best situations we have been able to manage so far. As you can see, there are many challenges that we face with each situation and area we have to work in. Thanks for your comments and suggestions; we appreciate them.
    Sent from my iPhone
    My apologies for any typos

  • Wanting to "normalize" all of my songs in my iTunes...

    I have just picked up a new MP3 player.
    Before I start to add the rest of my vast music collection, I want to "normalize" my current library.
    I have just picked up a new WDig external 2TB hard drive, and have set my iTunes to save to and read from the external hard drive.
    Basically, I want to set my entire music library to the same extension and bitrate so it is as "normalized" as possible, and I have consistency once it is all on my MP3 player.
    Please help me out - hopefully I won't have to do anything as drastic as "Create MP3 Version" of each song.

    I too own a third party mp3 player, but I do not expect Apple to support it with iTunes.  In fact, for the Sansa Clip I do find various third party scripts that make file transferring easier.  With the Cowon you can drag compatible files from the iTunes window to the device in Finder.
    The option to convert files to a different format is there, and if you want to delete the originals and just keep the mp3 versions you can do that.  It's a pity, though, that your player apparently doesn't support the better quality AAC/mp4 which even many older players support.  The thing does play WMA, which is a Windows proprietary format. It only accepts WMV, AVI, and ASF video formats, which are all Windows-specific (you can get AVI to play on Macs using third party support). This thing was clearly designed to work with Windows and isn't even making a step towards playing more Mac-standard formats such as AAC and pure mp4 audio and video.  Frankly, it's in the same class as my old RCA mp3 player.  Amazon reviews don't rate it highly, and my RCA player has almost completely dropped off the radar (try finding it anywhere on the web).  Pretty much the only good thing the top rated review has to say about it is that you can get files on and off it without additional software.

  • SXMB_MONI only getting the Inbound Message ( CENTRAL ) not other steps

    Hi All,
    I am doing a File-to-Proxy scenario. I am posting the file using file content conversion. Whenever I check the scenario through Tools --> Test,
    it works perfectly fine.
    But when I send the file and check SXMB_MONI,
    it only shows the Inbound Message (CENTRAL) step. After that it does not show any receiver message steps. What is the problem? Are there any services not activated? Can you help me with this?
    Thanks
    Manas

    Hi Ravi,
    I have checked the trace level; it is 1. I have given the trace below.
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Inbound Message
      -->
    - <SAP:Trace xmlns:SAP="http://sap.com/xi/XI/Message/30">
      <Trace level="1" type="T">Party normalization: sender</Trace>
      <Trace level="1" type="T">Sender scheme external = XIParty</Trace>
      <Trace level="1" type="T">Sender agency external = http://sap.com/xi/XI</Trace>
      <Trace level="1" type="T">Sender party external =</Trace>
      <Trace level="1" type="T">Sender party normalized =</Trace>
      <Trace level="1" type="T">Party normalization: receiver</Trace>
      <Trace level="1" type="T">Receiver scheme external =</Trace>
      <Trace level="1" type="T">Receiver agency external =</Trace>
      <Trace level="1" type="T">Receiver party external =</Trace>
      <Trace level="1" type="T">Receiver party normalized =</Trace>
      <Trace level="1" type="B" name="CL_XMS_HTTP_HANDLER-HANDLE_REQUEST" />
    - <!--  ************************************
      -->
      <Trace level="1" type="T">XMB was called with URL /sap/xi/engine?type=entry</Trace>
      <Trace level="1" type="T">COMMIT is done by XMB !</Trace>
      <Trace level="1" type="B" name="CL_XMS_MAIN-ENTER_XMS" />
    - <!--  ************************************
      -->
      <Trace level="1" type="B" name="CL_XMS_MAIN-SET_START_PIPELINE" />
    - <!--  ************************************
      -->
      <Trace level="1" type="B" name="SXMBCONF-SXMB_GET_XMB_USE" />
      <Trace level="1" type="B" name="CL_XMS_TROUBLESHOOT-ENTER_PLSRV" />
      <Trace level="1" type="T">****************************************************</Trace>
      <Trace level="1" type="T">* *</Trace>
      <Trace level="1" type="T">* *</Trace>
      <Trace level="1" type="T">XMB entry processing</Trace>
      <Trace level="1" type="T">system-ID = PID</Trace>
      <Trace level="1" type="T">client = 100</Trace>
      <Trace level="1" type="T">language = E</Trace>
      <Trace level="1" type="T">user = PIAFUSER</Trace>
      <Trace level="1" type="Timestamp">2008-07-08T11:24:07Z PST</Trace>
      <Trace level="1" type="T">* *</Trace>
      <Trace level="1" type="T">* *</Trace>
      <Trace level="1" type="T">****************************************************</Trace>
      <Trace level="1" type="B" name="CL_XMS_MAIN-CALL_UC_EXECUTE" />
    - <!--  ************************************
      -->
      <Trace level="1" type="T">Message-GUID = 04B466276FDB48713ABDB7E5A31732C5</Trace>
      <Trace level="1" type="T">PLNAME = CENTRAL</Trace>
      <Trace level="1" type="T">QOS = EO</Trace>
      <Trace level="1" type="B" name="CL_XMS_MAIN-CALL_PIPELINE_ASYNC" />
    - <!--  ************************************
      -->
      <Trace level="1" type="T">Get definition of external pipeline = CENTRAL</Trace>
      <Trace level="1" type="B" name="CL_XMS_MAIN-LOOKUP_INTERNAL_PL_ID" />
      <Trace level="1" type="T">Get definition of internal pipeline = SAP_CENTRAL</Trace>
      <Trace level="1" type="T">Queue name : XBTI0006</Trace>
      <Trace level="1" type="T">Generated prefixed queue name = XBTI0006</Trace>
      <Trace level="1" type="T">Schedule message in qRFC environment</Trace>
      <Trace level="1" type="T">Setup qRFC Scheduler OK!</Trace>
      <Trace level="1" type="T">----
    </Trace>
      <Trace level="1" type="T">Going to persist message</Trace>
      <Trace level="1" type="T">NOTE: The following trace entries are always lacking</Trace>
      <Trace level="1" type="T">- Exit WRITE_MESSAGE_TO_PERSIST</Trace>
      <Trace level="1" type="T">- Exit CALL_PIPELINE_ASYNC</Trace>
      <Trace level="1" type="T">Async barrier reached. Bye-bye !</Trace>
      <Trace level="1" type="T">----
    </Trace>
      <Trace level="1" type="B" name="CL_XMS_MAIN-WRITE_MESSAGE_TO_PERSIST" />
    - <!--  ************************************
      -->
      <Trace level="1" type="T">--start sender interface action determination</Trace>
      <Trace level="1" type="T">select interface MIOA_MT_GLEmpExp*</Trace>
      <Trace level="1" type="T">select interface namespace http://omnicell.com/FIN</Trace>
      <Trace level="1" type="T">no interface found</Trace>
      <Trace level="1" type="T">--start receiver interface action determination</Trace>
      <Trace level="1" type="T">Loop 0000000001</Trace>
      <Trace level="1" type="T">select interface *</Trace>
      <Trace level="1" type="T">select interface namespace</Trace>
      <Trace level="1" type="T">no interface found</Trace>
      <Trace level="1" type="T">--no sender or receiver interface definition found</Trace>
      <Trace level="1" type="T">Hence set action to DEL</Trace>
      </SAP:Trace>

  • Can I change the step of volume control?

    I have a problem with my iPod when I adjust the volume of different tracks: I can't get a volume level preferable to my ears. It sounds either too loud or too quiet. Am I able to set the step of the volume change manually?
    I'm not interested in the normalize function of iTunes.
    Thanks.
    Message was edited by: thugtronik

    Well... you can always right-click the song you know is too loud or too quiet (in iTunes, of course),
    then choose Get Info, and under the Options tab you get a volume control for that specific song.

  • Help, please!! -Normalize CD tracks!

    Hello!
    I have to produce a CD with 36 tracks (projects made and bounced in Logic Pro 8).
    What is the best way to ensure that all tracks use the same volume level, so that no tracks are louder or softer than others when listening to the CD?
    The same Output 1-2 channel strip settings for all tracks? The same master channel volume?
    I'd hate to leave it to Toast or even Nero to 'normalize' it, as I believe it kills the CD.
    Can someone help me out please?
    Thanks very much!
    Best!

    NComposer wrote:
    Hello!
    I have to produce a CD with 36 tracks (projects made and bounced in Logic Pro 8).
    What is the best way to ensure that all tracks use the same volume level, so that no tracks are louder or softer than others when listening to the CD?
    Use WaveBurner, which you have, and load in all the tracks. Individual levels of songs can be adjusted there, as well as adding some processing, if you choose, such as compression and limiting, to alter apparent volume of your songs..
    The same Output 1-2 channel strip settings for all tracks? The same master channel volume?
    I'd hate to leave it to Toast or even Nero to 'normalize' it, as I believe it kills the CD.
    Normalizing isn't going to help you very much.
    Normalizing just brings up the overall level so that the peaks are close to zero. It does little to adjust the perceived volume.
    Example: if one of your tracks has a maximum peak value of -8 dB, but your RMS level (close to perceived volume) is -20 dB, normalizing will bring the perceived level up to -12 dB. Depending on your material, this may or may not work for this or your other songs. It's the perceived volume you will want to adjust, without ruining the dynamics of your songs. Each song may need different treatment.
    Experiment with Logic's multi-band compressor and adaptive limiter, but use these with some caution. If you're talking about a CD intended for release, perhaps consider getting it mastered by a mastering engineer. But there's no harm in experimenting with the tools you have first - just don't do it as part of the mixing process - it's a separate, final step.
    Edit: Jorge beat me to it
