Consulta sobre bases de datos (Question about databases)

Greetings. I wanted to ask whether there is any way to have one PC running Lookout processes (Hypertrend) without storing anything in the Citadel database, and on another PC a database that actually stores the data. The idea is to have one PC that only stores data, while the other PC graphs the data stored on the first PC by means of Hypertrend. Any help would be much appreciated.

Hello,
What you can do is run Lookout's Hypertrend on one computer and have it point directly to the database on the other computer (where the data is being stored). For this, the two computers would have to be properly connected over a network.
Regards.

Similar Messages

  • Question about dates in FBRA

    I have a question about transaction FBRA.
    I will show the problem with an example.
    On 01.03.2012 I had an invoice document and a payment document, and I decided to clear these documents with transaction F.13.
    On 05.03.2012 I decided to reset the clearing of these documents with transaction FBRA. I did it without problem.
    On 06.03.2012 I went to transaction FBL5N and queried the cleared items as of 04.03.2012. These documents were cleared on that date, but they appear as open items.
    Is there any way to reset the item from a specific date?
    Many Thanks & Best Regards
    Luciano Capua

    No. There isn't a way to reset the item for a specific date. Once you reset a cleared line item and set its status to open, it will appear as open even if you run the report for a date earlier than the resetting date. The status icon (open/cleared) in FBL5N depends only on the 'Doc. Status' field in table BKPF, and there is no lookup on the date before the status appears in the FBL5N report.
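    The lookup described above can be illustrated with a minimal SQL sketch, assuming direct read access to the underlying table (BKPF and its status field BSTAT come from the standard SAP data dictionary; the key values are hypothetical):
      -- FBL5N derives the open/cleared icon from the document's *current* status,
      -- not from the status it had on the report date you enter.
      SELECT belnr, gjahr, bstat
        FROM bkpf
       WHERE bukrs = '1000'          -- hypothetical company code
         AND belnr = '1400000001'    -- hypothetical document number
         AND gjahr = '2012';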

  • Query is running very slowly when querying historical data in EBS

    SELECT *
      FROM (select NVL(TO_CHAR(aps.vendor_id), 'SIN PROVEEDOR') CODIGO,
                   NVL(aps.vendor_name, 'SIN PROVEEDOR') PROVEEDOR,
                   null IMPORTE_TRANSACCIONAL,
                   SUM(nvl(CTL.GROSS_EXTENDED_AMOUNT, CTL.EXTENDED_AMOUNT) *
                       NVL(Cta.EXCHANGE_RATE, 1)) IMPORTE_FUNCIONAL
              from ra_customer_trx_all cta,
                   ra_cust_trx_types_all ctt,
                   ra_customer_trx_lines_all ctl,
                   mtl_system_items_b inv,
                   hz_cust_accounts ca,
                   hz_parties p,
                   AR_CUSTOMERS ac,
                   hz_cust_acct_sites_all hcca,
                   hz_cust_site_uses_all hcsua,
                   RA_TERRITORIES rt,
                   Jtf_Rs_Salesreps vend,
                   gl_code_combinations glc,
                   ap_suppliers aps,
                   (select paa.item_id, paa.vendor_id
                      from PO_ASL_ATTRIBUTES paa, PO_APPROVED_SUPPLIER_LIST PASL
                     where Paa.Asl_Id = PASL.Asl_Id(+)
                       and pasl.disable_flag is null) paa2
             where CTA.cust_trx_type_id = ctt.cust_trx_type_id
               and ctl.customer_trx_id = cta.customer_trx_id
               and ctl.Inventory_Item_Id = inv.inventory_item_id(+)
               and ctt.gl_id_rec = glc.code_combination_id
               and ca.cust_account_id = cta.bill_to_customer_id
               and p.party_id = ca.party_id
               and ac.CUSTOMER_ID = cta.bill_to_customer_id
               and hcca.cust_account_id = ca.cust_account_id
               and hcca.org_id = '82'
               and hcsua.site_use_id = cta.bill_to_site_use_id
               and hcsua.cust_acct_site_id = hcca.cust_acct_site_id
               and hcsua.org_id = '82'
               and rt.territory_id(+) = hcca.territory_id
               and cta.primary_salesrep_id = vend.salesrep_id(+)
               and inv.organization_id = '84'
               and paa2.vendor_id = aps.vendor_id(+)
               and ctl.inventory_item_id = paa2.item_id(+)
               AND CTT.TYPE IN ('INV', 'CM')
               and ca.cust_account_id(+) = cta.bill_to_customer_id
               and p.party_id(+) = ca.party_id
               and ctl.Line_Type = 'LINE'
               and cta.complete_flag = 'Y'
               and cta.status_trx not in ('VD')
               and cta.legal_entity_id = '23274'
               and cta.trx_date between to_Date('01/10/2014', 'dd/MM/yyyy') AND
                   to_Date('13/10/2014', 'dd/MM/yyyy')
             group by aps.vendor_id, aps.vendor_name
            UNION
            select 'SIN ITEM' CODIGO,
                   'SIN ITEM' PROVEEDOR,
                   null IMPORTE_TRANSACCIONAL,
                   SUM(nvl(CTL.GROSS_EXTENDED_AMOUNT, CTL.EXTENDED_AMOUNT) *
                       NVL(Cta.EXCHANGE_RATE, 1)) IMPORTE_FUNCIONAL
              from ra_customer_trx_all       cta,
                   ra_cust_trx_types_all     ctt,
                   ra_customer_trx_lines_all ctl,
                   hz_cust_accounts          ca,
                   hz_parties                p,
                   AR_CUSTOMERS              ac,
                   hz_cust_acct_sites_all    hcca,
                   hz_cust_site_uses_all     hcsua,
                   RA_TERRITORIES            rt,
                   Jtf_Rs_Salesreps          vend,
                   gl_code_combinations      glc
             where CTA.cust_trx_type_id = ctt.cust_trx_type_id
               and ctl.customer_trx_id = cta.customer_trx_id
               and ctl.Inventory_Item_Id is null
               and ctt.gl_id_rec = glc.code_combination_id
               and ca.cust_account_id = cta.bill_to_customer_id
               and p.party_id = ca.party_id
               and ac.CUSTOMER_ID = cta.bill_to_customer_id
               and hcca.cust_account_id = ca.cust_account_id
               and hcca.org_id = '82'
               and hcsua.site_use_id = cta.bill_to_site_use_id
               and hcsua.cust_acct_site_id = hcca.cust_acct_site_id
               and hcsua.org_id = '82'
               and rt.territory_id(+) = hcca.territory_id
               and cta.primary_salesrep_id = vend.salesrep_id(+)
               AND CTT.TYPE IN ('INV', 'CM')
               and ca.cust_account_id(+) = cta.bill_to_customer_id
               and p.party_id(+) = ca.party_id
               and ctl.Line_Type = 'LINE'
               and cta.complete_flag = 'Y'
               and cta.status_trx not in ('VD')
               and cta.legal_entity_id = '23274'
               and cta.trx_date between to_Date('01/10/2014', 'dd/MM/yyyy') AND
                   to_Date('13/10/2014', 'dd/MM/yyyy')
             group by 'SIN ITEM', 'SIN ITEM') T_GRUPO
    order by DECODE(T_GRUPO.PROVEEDOR,
                     'SIN ITEM',
                     'A',
                     'SIN PROVEEDOR',
                     'A',
                     T_GRUPO.PROVEEDOR)
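    One detail worth checking before deeper tuning, offered as a sketch: the SORT UNIQUE in the plan below comes from the UNION, but the second branch only ever emits the literal 'SIN ITEM' row, which should never collide with the grouped vendor rows of the first branch, so UNION ALL should return the same result without the de-duplication sort. The difference in miniature:
      -- UNION forces a SORT UNIQUE to remove duplicates across branches;
      -- UNION ALL skips it, which is safe when the branches are disjoint.
      SELECT 'SIN ITEM' AS proveedor FROM dual
      UNION ALL
      SELECT 'ACME'     AS proveedor FROM dual;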

    Hi Hussein,
    APP 12.1.3
    Database 11.2.0.3.0
    OS: Linux x86-64
    statistics: gathered one month ago
    this slowness started three months ago
    Execution Plan
    | Id  | Operation                                    | Name                         | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                             |                              |     2 |   318 | 12680   (1)|
    |   1 |  SORT ORDER BY                               |                              |     2 |   318 | 12680   (1)|
    |   2 |   VIEW                                       |                              |     2 |   318 | 12679   (1)|
    |   3 |    SORT UNIQUE                               |                              |     2 |   697 | 12679  (51)|
    |   4 |     UNION-ALL                                |                              |       |       |            |
    |   5 |      HASH GROUP BY                           |                              |     1 |   377 |  6338   (1)|
    |   6 |       NESTED LOOPS OUTER                     |                              |     1 |   377 |  6336   (1)|
    |   7 |        NESTED LOOPS OUTER                    |                              |     1 |   343 |  6336   (1)|
    |   8 |         NESTED LOOPS                         |                              |     1 |   328 |  6335   (1)|
    |   9 |          NESTED LOOPS                        |                              |     1 |   323 |  6335   (1)|
    |  10 |           NESTED LOOPS OUTER                 |                              |     1 |   311 |  6333   (1)|
    |  11 |            NESTED LOOPS                      |                              |     1 |   304 |  6332   (1)|
    |  12 |             NESTED LOOPS                     |                              |     1 |   296 |  6332   (1)|
    |  13 |              NESTED LOOPS                    |                              |     1 |   275 |  6329   (1)|
    |* 14 |               HASH JOIN                      |                              |     1 |   192 |  6328   (1)|
    |* 15 |                HASH JOIN                     |                              |  6493 |   450K|  5778   (1)|
    |* 16 |                 HASH JOIN                    |                              |  6493 |   367K|  5432   (1)|
    |* 17 |                  TABLE ACCESS BY INDEX ROWID | RA_CUSTOMER_TRX_ALL          |  6556 |   288K|  4635   (1)|
    |* 18 |                   INDEX RANGE SCAN           | RA_CUSTOMER_TRX_N5           | 26223 |       |    97   (2)|
    |* 19 |                  TABLE ACCESS FULL           | HZ_CUST_SITE_USES_ALL        | 40227 |   510K|   797   (3)|
    |* 20 |                 TABLE ACCESS FULL            | HZ_CUST_ACCT_SITES_ALL       | 20020 |   254K|   345   (3)|
    |  21 |                TABLE ACCESS FULL             | HZ_CUST_ACCOUNTS             | 40020 |  4728K|   549   (3)|
    |* 22 |               INDEX UNIQUE SCAN              | HZ_PARTIES_U1                |     1 |       |     0   (0)|
    |* 23 |              TABLE ACCESS BY INDEX ROWID     | RA_CUSTOMER_TRX_LINES_ALL    |     1 |    21 |     3   (0)|
    |* 24 |               INDEX RANGE SCAN               | RA_CUSTOMER_TRX_LINES_N2     |     4 |       |     2   (0)|
    |* 25 |             INDEX UNIQUE SCAN                | MTL_SYSTEM_ITEMS_B_U1        |     1 |     8 |     0   (0)|
    |* 26 |            INDEX RANGE SCAN                  | JTF_RS_SALESREPS_U1          |     1 |     7 |     1   (0)|
    |* 27 |           TABLE ACCESS BY INDEX ROWID        | RA_CUST_TRX_TYPES_ALL        |     1 |    12 |     2   (0)|
    |* 28 |            INDEX RANGE SCAN                  | RA_CUST_TRX_TYPES_U1         |     1 |       |     1   (0)|
    |* 29 |          INDEX UNIQUE SCAN                   | GL_CODE_COMBINATIONS_U1      |     1 |     5 |     0   (0)|
    |  30 |         VIEW PUSHED PREDICATE                |                              |     1 |    15 |     1   (0)|
    |* 31 |          FILTER                              |                              |       |       |            |
    |  32 |           NESTED LOOPS OUTER                 |                              |     1 |    54 |     1   (0)|
    |  33 |            TABLE ACCESS BY INDEX ROWID       | PO_ASL_ATTRIBUTES            |     1 |    39 |     1   (0)|
    |* 34 |             INDEX SKIP SCAN                  | PO_ASL_ATTRIBUTES_N1         |     1 |       |     1   (0)|
    |  35 |            TABLE ACCESS BY INDEX ROWID       | PO_APPROVED_SUPPLIER_LIST    |     1 |    15 |     0   (0)|
    |* 36 |             INDEX UNIQUE SCAN                | PO_APPROVED_SUPPLIER_LIST_U1 |     1 |       |     0   (0)|
    |  37 |        TABLE ACCESS BY INDEX ROWID           | AP_SUPPLIERS                 |     1 |    34 |     0   (0)|
    |* 38 |         INDEX UNIQUE SCAN                    | AP_SUPPLIERS_U1              |     1 |       |     0   (0)|
    |  39 |      SORT GROUP BY NOSORT                    |                              |     1 |   320 |  6341   (1)|
    |  40 |       NESTED LOOPS                           |                              |       |       |            |
    |  41 |        NESTED LOOPS                          |                              |     1 |   320 |  6340   (1)|
    |  42 |         NESTED LOOPS                         |                              |     1 |   299 |  6337   (1)|
    |  43 |          NESTED LOOPS OUTER                  |                              |     1 |   216 |  6336   (1)|
    |* 44 |           HASH JOIN                          |                              |     1 |   209 |  6335   (1)|
    |* 45 |            TABLE ACCESS FULL                 | HZ_CUST_ACCT_SITES_ALL       | 20020 |   254K|   345   (3)|
    |* 46 |            HASH JOIN                         |                              |  5819 |  1113K|  5989   (1)|
    |* 47 |             HASH JOIN                        |                              |  5875 |   430K|  5439   (1)|
    |* 48 |              HASH JOIN                       |                              |  5931 |   359K|  4641   (1)|
    |  49 |               NESTED LOOPS                   |                              |    38 |   646 |     6   (0)|
    |* 50 |                TABLE ACCESS FULL             | RA_CUST_TRX_TYPES_ALL        |    38 |   456 |     6   (0)|
    |* 51 |                INDEX UNIQUE SCAN             | GL_CODE_COMBINATIONS_U1      |     1 |     5 |     0   (0)|
    |* 52 |               TABLE ACCESS BY INDEX ROWID    | RA_CUSTOMER_TRX_ALL          |  6556 |   288K|  4635   (1)|
    |* 53 |                INDEX RANGE SCAN              | RA_CUSTOMER_TRX_N5           | 26223 |       |    97   (2)|
    |* 54 |              TABLE ACCESS FULL               | HZ_CUST_SITE_USES_ALL        | 40227 |   510K|   797   (3)|
    |  55 |             TABLE ACCESS FULL                | HZ_CUST_ACCOUNTS             | 40020 |  4728K|   549   (3)|
    |* 56 |           INDEX RANGE SCAN                   | JTF_RS_SALESREPS_U1          |     1 |     7 |     1   (0)|
    |* 57 |          INDEX UNIQUE SCAN                   | HZ_PARTIES_U1                |     1 |       |     0   (0)|
    |* 58 |         INDEX RANGE SCAN                     | RA_CUSTOMER_TRX_LINES_N2     |     4 |       |     2   (0)|
    |* 59 |        TABLE ACCESS BY INDEX ROWID           | RA_CUSTOMER_TRX_LINES_ALL    |     1 |    21 |     3   (0)|
    Predicate Information (identified by operation id):
      14 - access("CUST"."CUST_ACCOUNT_ID"="CTA"."BILL_TO_CUSTOMER_ID" AND
                  "HCCA"."CUST_ACCOUNT_ID"="CUST_ACCOUNT_ID")
      15 - access("HCSUA"."CUST_ACCT_SITE_ID"="HCCA"."CUST_ACCT_SITE_ID")
      16 - access("HCSUA"."SITE_USE_ID"="CTA"."BILL_TO_SITE_USE_ID")
      17 - filter("CTA"."COMPLETE_FLAG"='Y' AND "CTA"."STATUS_TRX"<>'VD' AND "CTA"."                                                                            LEGAL_ENTITY_ID"=23274)
      18 - access("CTA"."TRX_DATE">=TO_DATE(' 2014-01-01 00:00:00', 'syyyy-mm-dd hh2                                                                             
    4:mi:ss') AND
                  "CTA"."TRX_DATE"<=TO_DATE(' 2014-10-13 00:00:00', 'syyyy-mm-dd hh2                                                                             
    4:mi:ss'))
      19 - filter("HCSUA"."ORG_ID"=82)
      20 - filter("HCCA"."ORG_ID"=82)
      22 - access("PARTY_ID"="PARTY_ID")
      23 - filter("CTL"."INVENTORY_ITEM_ID" IS NOT NULL AND "CTL"."LINE_TYPE"='LINE'                                                                             
      24 - access("CTL"."CUSTOMER_TRX_ID"="CTA"."CUSTOMER_TRX_ID")
      25 - access("CTL"."INVENTORY_ITEM_ID"="INV"."INVENTORY_ITEM_ID" AND "INV"."ORG                                                                             
    ANIZATION_ID"=84)
      26 - access("CTA"."PRIMARY_SALESREP_ID"="VEND"."SALESREP_ID"(+))
      27 - filter("CTT"."GL_ID_REC" IS NOT NULL AND ("CTT"."TYPE"='CM' OR "CTT"."TYP                                                                             
    E"='INV'))
      28 - access("CTA"."CUST_TRX_TYPE_ID"="CTT"."CUST_TRX_TYPE_ID")
      29 - access("CTT"."GL_ID_REC"="GLC"."CODE_COMBINATION_ID")
      31 - filter("PASL"."DISABLE_FLAG" IS NULL)
      34 - access("PAA"."ITEM_ID"="CTL"."INVENTORY_ITEM_ID")
           filter("PAA"."ITEM_ID"="CTL"."INVENTORY_ITEM_ID")
      36 - access("PAA"."ASL_ID"="PASL"."ASL_ID"(+))
      38 - access("PAA2"."VENDOR_ID"="APS"."VENDOR_ID"(+))
      44 - access("HCCA"."CUST_ACCOUNT_ID"="CUST_ACCOUNT_ID" AND
                  "HCSUA"."CUST_ACCT_SITE_ID"="HCCA"."CUST_ACCT_SITE_ID")
      45 - filter("HCCA"."ORG_ID"=82)
      46 - access("CUST"."CUST_ACCOUNT_ID"="CTA"."BILL_TO_CUSTOMER_ID")
      47 - access("HCSUA"."SITE_USE_ID"="CTA"."BILL_TO_SITE_USE_ID")
      48 - access("CTA"."CUST_TRX_TYPE_ID"="CTT"."CUST_TRX_TYPE_ID")
      50 - filter("CTT"."GL_ID_REC" IS NOT NULL AND ("CTT"."TYPE"='CM' OR "CTT"."TYP                                                                             
    E"='INV'))
      51 - access("CTT"."GL_ID_REC"="GLC"."CODE_COMBINATION_ID")
      52 - filter("CTA"."COMPLETE_FLAG"='Y' AND "CTA"."STATUS_TRX"<>'VD' AND "CTA"."                                                                             
    LEGAL_ENTITY_ID"=23274)
      53 - access("CTA"."TRX_DATE">=TO_DATE(' 2014-01-01 00:00:00', 'syyyy-mm-dd hh2                                                                             
    4:mi:ss') AND
                  "CTA"."TRX_DATE"<=TO_DATE(' 2014-10-13 00:00:00', 'syyyy-mm-dd hh2                                                                             
    4:mi:ss'))
      54 - filter("HCSUA"."ORG_ID"=82)
      56 - access("CTA"."PRIMARY_SALESREP_ID"="VEND"."SALESREP_ID"(+))
      57 - access("PARTY_ID"="PARTY_ID")
      58 - access("CTL"."CUSTOMER_TRX_ID"="CTA"."CUSTOMER_TRX_ID")
      59 - filter("CTL"."INVENTORY_ITEM_ID" IS NULL AND "CTL"."LINE_TYPE"='LINE')
    Note
       - 'PLAN_TABLE' is old version
    Statistics
             15  recursive calls
              0  db block gets
       62543652  consistent gets
         269141  physical reads
            172  redo size
          12832  bytes sent via SQL*Net to client
            674  bytes received via SQL*Net from client
             16  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
            212  rows processed
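    Given that the statistics are a month old while the slowdown started three months ago, and given the 62 million consistent gets, a reasonable first step is to refresh the optimizer statistics on the largest tables in the plan. A minimal sketch with DBMS_STATS follows (the AR schema owner is an assumption; on E-Business Suite the supported route is FND_STATS or the Gather Schema Statistics concurrent program). Separately, the "'PLAN_TABLE' is old version" note can be cleared by recreating the plan table from $ORACLE_HOME/rdbms/admin/utlxplan.sql.
      BEGIN
        -- Refresh table (and, via cascade, index) statistics on the big tables in the plan.
        DBMS_STATS.GATHER_TABLE_STATS(ownname => 'AR', tabname => 'RA_CUSTOMER_TRX_ALL',       cascade => TRUE);
        DBMS_STATS.GATHER_TABLE_STATS(ownname => 'AR', tabname => 'RA_CUSTOMER_TRX_LINES_ALL', cascade => TRUE);
        DBMS_STATS.GATHER_TABLE_STATS(ownname => 'AR', tabname => 'HZ_CUST_ACCOUNTS',          cascade => TRUE);
      END;
      /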

  • Informatica PowerCenter consultation

    Prelogic Solutions Ltd
    Prelogic Solutions is a Canadian provider of advanced
    information technology products, services, and business consulting
    expertise:
    data integration, Informatica PowerCenter consultation, enterprise
    business solutions, and dynamic infrastructure.
    We provide on-site or remote consultation for data integration, data
    warehousing, and BI services at affordable rates.
    Our data integration infrastructure is based on Informatica
    PowerCenter Cloud Edition.
    We are able to develop an end-to-end enterprise data integration system or
    any individual phase,
    such as initiation, analysis, design, implementation, QA,
    deployment, and maintenance.
    [email protected]
    Canada-Toronto

    Try changing your browser (from IE to Firefox, for instance)?
    And double-check that the user has the right privileges, i.e. the right to read the NLS parameters.
    If you have identified the statement that hangs, test it with SQL*Plus to see whether the problem comes from the database
    or from Informatica.
    And if you don't find it, open a call (Service Request) with support: http://support.oracle.com
    Success
    Nico

  • Data Migration from CRM 5.0 to CRM 7.0

    Hi Friends,
    We are into a re-implementation on CRM 7.0. The old system was on CRM 5.0.
    Since this is not an upgrade, the data will not be copied over from the old system into the new system.
    So, now I would like to know how to migrate data (master/transaction) from CRM 5.0 into 7.0.
    I have read that we can make use of BAPI/LSMW/IDOC/XIF adapters to push info into CRM.
    But, the customer wants all the historical data to be in the new system.
    I think master data can be handled.... but how do we migrate the transaction data?.... I mean the number ranges/document flow and other relationships; how can we maintain the same links in the new system?
    If it were a migration from a legacy system into SAP CRM, we could have gone ahead with new number ranges etc., but this migration is from SAP to SAP.
    Also, we will have ECC 6.0 connected to CRM 7.0. Any additional things to be taken care of when the connection is live in the DEV/QA/PROD instances?
    Any pointers to this ?

    Hi Gary,
    As per my understanding, your data migration involves quite complex scenarios covering the CRM and ERP boxes. As the customer needs the old data to be present in the new implementation, the old data needs to be migrated from CRM 5.0 to CRM 7.0. For the migration to happen, you need to set up the customizing (the number ranges, condition data tables, etc.); only then can you proceed with the data migration. You also need a proper backup plan, as this work depends on a reliable backup of the data.
    Once the setup is complete, you need to connect ECC 6.0 with the CRM 7.0 system. Here too you need to make a lot of settings and perform the initial load of the adapter objects. But before you proceed with the entire operation, the entire system landscape must be properly defined with appropriate settings.
    My recommendation is to take the help of SAP consultants and data migration experts, as they are the best placed to guide you on this. It would also be great if you could go through the SAP Best Practices documents, which are the best reference documents on this.
    Have a look on this link for an overview of the CRM Data Migration
    http://help.sap.com/saphelp_crm60/helpdata/en/1a/023d63b8387c4a8dfea6592f3a23a7/frameset.htm
    Hope this helps.
    Thanks,
    Samantak.

  • BCS Documentation

    Hi BCS Experts,
    I am a BW consultant with an accounting background and would like to learn BCS. Could you please send me some good BCS documentation other than the SAP material? I completed the SAP BCS course exercises, but I don't have the confidence to start working in BCS yet. Could you please also tell me whether we load trial balances or financial statements into BCS via flexible upload?
    Thanks and regards,
    Harry

    Hi Harry,
    1. The file format completely depends on the flexible upload method settings. As an example, you can look at the structure of the fields and the file structure here:
    /people/sap.user72/blog/2006/07/21/how-to-upload-hierarchy-and-master-data-of-fs-items-in-sap-sem-bcs-40-by-using-flexible-upload-method
    For totals upload you just need to add:
    - item
    - amount in local currency
    - amount in transaction currency
    - transaction currency
    - movement type (optional)
    - company
    - business partner
    maybe some more analytical characteristics.
    2. As I wrote before (in another thread), I didn't see any good step-by-step docs related to BCS customizing. There are a couple of files that somehow explain this process (but not very well). So, I may send them to you provided that I have your mail ID.
    3. There are some notes related to overview, data basis and master data:
    773178 - Overview of consulting notes in SEM
    630474 - Consulting note - Data entry in SEM-BCS
    638477 - Data basis-Data Streams - characteristics fixed in consArea
    676337 - FinBasis - Leading fiscal year variant
    682481 - FinBasis - Customizing leading fiscal year variant
    727776 - Requirements of SEM-BCS for data model
    772743 - Data basis - Notes on the configuration
    779307 - Importance of fiscal year variants
    831324 - Data basis - Inconsistent data model
    883282 - Data basis - Various problems during creation
    889795 Dump GETWA_NOT_ASSIGNED in READ_VALUES_FROM_DATA_BASE (SP10)
    539647 - Synchronization of hierarchies with BW does not work
    578348 - FIN master data - Synchronization local system with BW
    689229 - FinBasis - Reports for the manual synchronization
    381626 - DataSources of consolidation for transaction data
    736226 - SEM-BCS 400 - Activation of the SAP delivery example
    859893 - SEM-BCS 60- Activating the SAP delivery example
    741004 - SEM-BCS - Activating delivery performance reference example
    Best regards,
    Eugene

  • Role in Implementation?

    Hi Experts,
    In many interviews, interviewers have repeatedly asked me what my role in the project was. Please guide me with a step-wise answer.
    Thanks in advance
    kiran

    SAP BW Data Extraction Consultant: The BW Data Extraction Consultant is responsible to identify and obtain the data required to satisfy the requirements of the BW project. This data may include:
    SAP R/3 data
    New Dimension product data
    Data external to SAP within the organization (legacy data)
    Data external to SAP from outside the organization (provider data – D&B, Nielsen)
    The BW Data Extraction Consultant role has a broad range of responsibilities and may require multiple individuals to satisfy the role depending on the scope of the BW project and the complexity and quality of the data.
    If SAP R/3 and New Dimension data only is required to satisfy requirements and if this data is included in the standard Business Content of BW, this role may be combined with the BW Application Consultant role. This standard Business Content allows for extraction of R/3 and New Dimension data in a straightforward and rapid manner.
    If non-SAP data is required, if standard Business Content must be enhanced significantly, if BAPI interfaces are being used, and/or if the data quality from the source system is insufficient, this role can be quite complex and can require significant resources. This complexity and quality of data is a primary contributor to the size and scope of the BW project.
    If legacy data is being extracted a close relationship is required with the legacy extraction expert. In some cases, the legacy extraction expert may assume this responsibility.
    Specifically, the BW Data Extraction Consultant is responsible for:
    Designing the data solution to satisfy defined business requirements
    Identifying the data in the source environment
    Mapping the data to the BW environment
    Identifying data quality gaps
    Developing a plan to close data quality gaps
    Developing the required extraction programs, if necessary
    Developing the associated interface programs, if necessary
    Testing of all developed programs
    Ensuring integration testing of data from various sources
    Developing a production support plan
    SAP BW Data Access Consultant: The BW Data Access Consultant is responsible to assess the business requirements, and design and develop a data access solution for the BW project. This solution may include use of:
    BW’s Business Explorer
    Non-SAP Data Access tools (e.g., Business Objects, Cognos, Crystal Reports, and other certified data access tools)
    Visual Basic development
    Web development
    WAP (wireless) development
    R/3 drill-through
    The BW Data Access Consultant role has a broad range of responsibilities and may require multiple individuals to satisfy the role depending on the scope of the BW project and the requirements associated with data access.
    The BW Data Access Consultant should work closely with the individuals responsible for business requirements gathering and analysis and have a thorough understanding of the way the data will be used to make business decisions.
    Often significant change management issues are generated as a result of modifications required by end users to the data access design and implementation. As a result the BW Data Access Consultant is in a key position to provide valuable information to the change agent or change management process.
    Specifically, the BW Data Access Consultant is responsible for designing the data access solution to include:
    Understanding the data that will be available in BW in business terms
    Identifying the way end users want to analyze the data in BW
    Designing the data access solution to satisfy defined business requirements
    The BW Data Access Consultant is also responsible for developing the data access solution to include:
    Developing options for data access (i.e. web solution, R/3 drill through, ODS reporting, master data reporting, 3rd party tools)
    Developing prototypes of data access for review with end users
    Developing the required data access solutions
    Developing the associated interface programs and/or customized web enhancements, if necessary
    Configuring the Reporting Agent, if necessary
    Configuring the GIS
    Testing of all developed solutions
    Ensuring integration testing of data access solution
    Developing a production support plan
    Working with training development to include data access solution in BW course materials
    SAP BW Data Architect: The BW Data Architect is responsible for the overall data design of the BW project. This includes the design of the:
    BW InfoCubes (Basic Cubes, Multi-cubes, Remote cubes, and Aggregates)
    BW ODS Objects
    BW Datamarts
    Logical Models
    BW Process Models
    BW Enterprise Models
    The BW Data Architect plays a critical role in the BW project and is the link between the end user’s business requirements and the data architecture solution that will satisfy these requirements. All other activities in the BW project are contingent upon the data design being sound and flexible enough to satisfy evolving business requirements.
    The BW Data Architect is responsible for capturing the business requirements for the BW project. This effort includes:
    Planning the business requirements gathering sessions and process
    Coordinating all business requirements gathering efforts with the BW Project Manager
    Facilitating the business requirements gathering sessions
    Capturing the information and producing the deliverables from the business requirements gathering sessions
    Understanding and documenting business definitions of data
    Developing the data model
    Ensuring integration of data from both SAP and non-SAP sources
    Fielding questions concerning the data content, definition and structure
    This role should also address other critical data design issues such as:
    Granularity of data and the potential for multiple levels of granularity
    Use of degenerate dimensions
    InfoCube partitioning
    Need for aggregation at multiple levels
    Need for storing derived BW data
    Ensuring overall integrity of all BW Models
    Providing Data Administration development standards for business requirements analysis and BW enterprise modeling
    Provide strategic planning for data management
    Impact analysis of data change requirements
    As stated above, the BW Data Architect is responsible for the overall data design of the BW project. This includes the design of the:
    BW InfoCubes (Basic Cubes, Multi-cubes, Remote cubes, and Aggregates)
    BW ODS Objects
    BW Datamarts
    Logical Models
    BW Process Models
    BW Enterprise Models
    SAP BW Application Consultant: The BW Application Consultant is responsible for utilizing BW to satisfy the business requirements identified for the project. As provided in the other roles, if the scope of the BW project is tightly controlled and can use standard BW Business Content, InfoCubes, and Queries, the BW Application Consultant may assume the responsibility to perform several roles concurrently to include:
    BW Data Architect
    BW Data Access Consultant
    BW Data Extraction Consultant
    SAP Project Manager
    Business Process Team Lead
    Authorization Administrator
    If this occurs, the BW Application Consultant must have a broad range of skills and this position will be under significant pressure during the course of the BW project. In this situation, the BW Application Consultant inherently must be responsible for the overall integrated design and realization of the BW solution.
    If the project scope is broad and must extend Business Content, InfoCubes and/or Queries, then the project warrants resources being assigned to the roles identified above. In this case, the BW Application Consultant is responsible for the overall integrated design and coordinated realization of the BW solution.
    If this role is assumed by an SAP Consultant, often the expectations are that they are familiar with all components and functionality of Business Information Warehouse. This role often naturally becomes a focal point for all design consideration related to BW.
    The BW Application Consultant (or one of the resources identified above) uses the BW Administrator Workbench to perform the functions provided by BW:
    Establish connections to the BW sources
    Activate the standard Business Content
    Enable the standard InfoCubes and Queries
    Enhance the InfoCubes as required by the BW Data Architect
    Enhance the Queries as required by the BW Data Access Consultant
    Define authorization profiles and access
    Evaluate statistical performance and make recommendations to Basis support for optimization where possible
    Manage the CTS layer
    SAP BW Basis Consultant: The BW Basis Person must be able to advise on BW Landscape issues, Transport environment, Authorisation, Performance Issues of Database and BW, Installation of BW Server, Plug Ins and Frontend (For all layers there are patches / support packages) that should be regularly installed.
    This role can be assumed by the Basis Consultant (However, additional BW skills are absolutely necessary)
    Hope it helps..

  • Problems in the creation of a MOLAP Cube with DBMS_AWM.

    I want to create a MOLAP cube with the DBMS_AWM package. So I created the ROLAP cube and the dimensions with the Enterprise Manager website, and everything worked perfectly. Then I executed the code to create the multidimensional dimensions and the multidimensional cube (awm dimensions/cubes), but I had some problems with the first dimension.
    This dimension has the name D_DESTIN, and has the following hierarchy:
    +DESTIN_DESC_H3
    |
    +---DESTIN_DESC_H2
    |
    +------DESTIN_DESC_H1
    |
    +---------DESTIN_DESC_H0
    The name of the hierarchy is H_D_DESTIN.
    The following code is the code that I used to create the first multidimensional dimension:
    set serveroutput on
    execute cwm2_olap_manager.set_echo_on;
    execute dbms_aw.execute ('aw create ''WTESTE''');
    execute dbms_awm.create_awdimension('EXEMPLO_OLAP', 'D_DESTIN', 'EXEMPLO_OLAP','WTESTE', 'WD_DESTIN');
    execute dbms_awm.create_awdimload_spec('D_DESTIN_LOAD', 'EXEMPLO_OLAP', 'D_DESTIN', 'FULL_LOAD');
    execute DBMS_AWM.SET_AWDIMLOAD_SPEC_PARAMETER ('D_DESTIN_LOAD','EXEMPLO_OLAP','D_DESTIN','UNIQUE_RDBMS_KEY','NO');
    execute dbms_awm.refresh_awdimension('EXEMPLO_OLAP', 'WTESTE', 'WD_DESTIN', 'D_DESTIN_LOAD');
    commit;
    execute cwm2_olap_manager.set_echo_off;
    execute cwm2_olap_manager.end_log
    When I executed the code above, I got the following error:
    PL/SQL procedure successfully completed.
    SP2-0103: Nothing in SQL buffer to run.
    PL/SQL procedure successfully completed.
    AMD-00001 created AWDimension "EXEMPLO_OLAP.WTESTE.WD_DESTIN"
    PL/SQL procedure successfully completed.
    AMD-00001 created AWDimLoad_Spec "D_DESTIN_LOAD.EXEMPLO_OLAP.D_DESTIN"
    PL/SQL procedure successfully completed.
    AMD-00002 set AWDimLoad_Spec_Parameter "D_DESTIN_LOAD.EXEMPLO_OLAP.D_DESTIN"
    UNIQUE_RDBMS_KEY to "NO"
    PL/SQL procedure successfully completed.
    ERROR Create_AWDimension. Problem refreshing dimension:
    WD_DESTIN
    Error
    Validating Dimension Mappings WD_DESTIN.DIMENSION. Key Expression
    DWH.D_DESTIN.DESTIN_KEY for Mapping Group
    WD_DESTIN.H_D_DESTIN.DESTIN_DESC_H0.DWH_D_DESTIN_WD_DESTIN_H_D_DESTIN
    DESTINDESC_H0.DIMENSIONMAPGROUP, Level WD_DESTIN.DESTIN_DESC_H0.LEVEL,
    Hierarchy WD_DESTIN.H_D_DESTIN.HIERARCHY is Incorrectly Mapped to
    RDBMS.
    (AW$XML) AW$XML
    In SYS.AWXML!__XML_HANDLE_ERROR PROGRAM:
    BEGIN dbms_awm.refresh_awdimension('EXEMPLO_OLAP', 'WTESTE',
    'WD_DESTIN', 'D_DESTIN_LOAD'); END;
    ERROR at line 1:
    ORA-06510: PL/SQL: unhandled user-defined exception
    ORA-06512: at "OLAPSYS.DBMS_AWM", line 1012
    ORA-06512: at line 1
    Commit complete.
    PL/SQL procedure successfully completed.
    PL/SQL procedure successfully completed.
    I don't know what is wrong. The ROLAP cube is valid according to the Oracle Enterprise Manager website, and it is possible to query its data with the "OracleBI Spreadsheet Add-In".
    What is wrong?
    Regards,
    Rui Torres

    I executed the same code as a different user and the MOLAP cube was created successfully.
    But I don't know which privilege/role permits this second user to create a MOLAP cube with the DBMS_AWM package.
    The privileges/roles of the first user are:
    ROLES
    ======
    CONNECT
    OLAP_DBA
    OLAP_USER
    OWBR_EXEMPLO_OLAP
    OWB_EXEMPLO_OLAP
    SYSTEM PRIVILEGES
    =================
    ALTER SESSION     
    CREATE ANY PROCEDURE     
    CREATE DATABASE LINK     
    CREATE DIMENSION     
    CREATE INDEXTYPE     
    CREATE MATERIALIZED VIEW     
    CREATE PROCEDURE     
    CREATE PUBLIC DATABASE LINK     
    CREATE PUBLIC SYNONYM     
    CREATE ROLE     
    CREATE SEQUENCE     
    CREATE SESSION     
    CREATE SYNONYM     
    CREATE TABLE     
    CREATE TRIGGER     
    CREATE TYPE     
    CREATE VIEW     
    DROP ANY PROCEDURE     
    DROP PUBLIC SYNONYM     
    EXECUTE ANY PROCEDURE     
    GLOBAL QUERY REWRITE     
    SELECT ANY TABLE     
    SYSDBA     
    UNLIMITED TABLESPACE
    OBJECTS PRIVILEGES
    ==================
    Object      Privilege     |Schema |Object     
    =======================================================
    SELECT               |SYS     |DBA_ROLE_PRIVS     
    EXECUTE               |SYS     |DBMS_LOCK     
    SELECT               |SYS     |DBMS_LOCK_ALLOCATED     
    EXECUTE               |SYS     |DBMS_OBFUSCATION_TOOLKIT     
    EXECUTE               |SYS     |DBMS_SNAPSHOT     
    SELECT               |SYS     |V_$LOCK     
    SELECT               |SYS     |V_$MYSTAT     
    SELECT               |SYS     |V_$SESSION     
    SELECT               |SYS     |V_$SYSTEM_PARAMETER
    The privileges/roles of the second user are:
    ROLES
    ======
    AQ_ADMINISTRATOR_ROLE          
    DBA          
    MGMT_USER
    SYSTEM PRIVILEGES
    =================
    CREATE MATERIALIZED VIEW     
    CREATE TABLE     
    GLOBAL QUERY REWRITE     
    SELECT ANY TABLE     
    UNLIMITED TABLESPACE
    OBJECTS PRIVILEGES
    ==================
    Object Privilege     |Schema     |Object     
    =============================================
    EXECUTE               |SYS     |DBMS_ALERT     
    EXECUTE               |SYS     |DBMS_AQ     
    EXECUTE               |SYS     |DBMS_AQADM     
    EXECUTE               |SYS     |DBMS_AQELM     
    EXECUTE               |SYS     |DBMS_AQ_IMPORT_INTERNAL     
    EXECUTE               |SYS     |DBMS_DEFER_IMPORT_INTERNAL     
    EXECUTE               |SYS     |DBMS_REPCAT     
    EXECUTE               |SYS     |DBMS_RULE_EXIMP     
    EXECUTE               |SYS     |DBMS_SYS_ERROR     
    EXECUTE               |SYS     |DBMS_TRANSFORM_EXIMP     
    ALTER               |SYS     |INCEXP     
    DEBUG               |SYS     |INCEXP     
    DELETE               |SYS     |INCEXP     
    FLASHBACK          |SYS     |INCEXP     
    INDEX               |SYS     |INCEXP     
    INSERT               |SYS     |INCEXP     
    ON COMMIT REFRESH     |SYS     |INCEXP     
    QUERY REWRITE          |SYS     |INCEXP     
    REFERENCES          |SYS     |INCEXP     
    SELECT               |SYS     |INCEXP     
    UPDATE               |SYS     |INCEXP     
    ALTER               |SYS     |INCFIL     
    DEBUG               |SYS     |INCFIL     
    DELETE               |SYS     |INCFIL     
    FLASHBACK          |SYS     |INCFIL
    Which privilege/role permits the second user to create a MOLAP cube?
    Regards,
    Rui Torres
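    To narrow down which grant makes the difference, one could diff the two accounts directly in the data dictionary, as a sketch (USER_A and USER_B are placeholders for the two account names; note that the second user's DBA role itself bundles many privileges):
      -- System privileges the second user has that the first does not
      SELECT privilege FROM dba_sys_privs WHERE grantee = 'USER_B'
      MINUS
      SELECT privilege FROM dba_sys_privs WHERE grantee = 'USER_A';

      -- Roles the second user has that the first does not
      SELECT granted_role FROM dba_role_privs WHERE grantee = 'USER_B'
      MINUS
      SELECT granted_role FROM dba_role_privs WHERE grantee = 'USER_A';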

  • External hard drive went read-only

    I have a Seagate FreeAgent GoFlex Desk external hard drive, onto which I've loaded movies, pictures, videos, and documents. I have only used up about 100GB of 1.5TB, and for some reason when I plugged it in today it won't let me load anything onto it; if you click Get Info on it, it says you can only read. Any idea what changed or how to fix this?

    BookX wrote:
    all Seagate hdd are NTFS formatted by default from manufacturer.
    I suggest verifying your information prior to posting. This drive
    <http://www.seagate.com/www/en-us/products/external/external-hard-drive/mac-desktop-hard-drive>
    is a Seagate drive, and it is not formatted as NTFS. If you consult its data sheet, you'll find it's formatted as HFS+, and compatibility with Windows is provided by a suitable driver, available for download from Seagate.
    If you need further information about Seagate drives, I suggest consulting Seagate directly, or, at least, reading some of the data sheets and tech papers provided by Seagate free of charge.

  • Mapping of Web App context root and the physical directory of the web app

    I'm running WebLogic 7.0 on Windows 2000. The physical directory of my web application
    is D:\WL8\weblogic81\TestDeploy\build\TestWebApp, and under this directory I have
    my JSPs, static HTML, and WEB-INF. I define the context path of this web app in
    the weblogic.xml:
    <weblogic-web-app>
         <context-root>/testapp</context-root>
    </weblogic-web-app>
    As a result of deploying this web app in the server (or it may also be created manually),
    the following entry gets inserted in the server's config.xml:
    <Application Deployed="true" Name="TestWebApp"
    Path="D:\WL8\weblogic81\TestDeploy\build" TwoPhase="true">
    <WebAppComponent Name="TestWebApp" Targets="myserver" URI="TestWebApp"/>
    </Application>
    Now, whenever I make a request of the form "http://localhost:7001/testapp/..",
    it properly executes my web app. My question is, how does the container know
    that for any request for the web app with context path 'testapp', it has to
    serve files from D:\WL8\weblogic81\TestDeploy\build\TestWebApp? In the above
    process, no such mapping is specified anywhere. I expected something like Tomcat's
    server.xml, where in docbase we clearly specify this mapping between the context
    path and the physical directory. Please help.

    Let me give some more details and hopefully this will make things clearer.
    Say you deploy /foo/bar/myweb.war and in myweb.war you configure a
    context-root of /rob
    During deployment, the server creates an ApplicationMBean with a path of
    /foo/bar/. It then creates a WebAppComponent with a uri of myweb.war.
    Next, deployment calls back on the web container and tells it to deploy
    the WebAppComponent. The web container reads the myweb.war, parses
    descriptors etc. The web container then updates its data structures to
    register that myweb.war has a context path of /rob. (It has to figure
    out all the other servlet mappings as well.)
    When a request for /rob/foo comes in, the web container consults its
    data structures to determine which webapp and servlet receives the
    request. This is not a linear search of all webapps and servlets.
    There are much better ways to do pattern matching.
    Hope this clears things up. Let me know if you still have questions.
    -- Rob
    Arindam Chandra wrote:
    Thanks for the answer. Still, one thing is not clear. Whatever context path I declare
    for my web app as the value of the <context-root> element in the weblogic.xml (in
    my example it's "/testapp"), it is nowhere mapped to the "URI" attribute (or
    any other attribute or sub-element in the <Application> element).
    <Application Deployed="true" Name="TestWebApp"
    Path="D:\WL8\weblogic81\TestDeploy\build" TwoPhase="true">
    <WebAppComponent Name="TestWebApp" Targets="myserver" URI="TestWebApp"/>
    </Application>
    So when a request of the form http://myweblogic.com:7001/testapp/... arrives at
    the server, how does the server know that it has to serve this request with files
    from D:\WL8\weblogic81\TestDeploy\build\TestWebApp? It should not be that the
    web container iterates through all the web application entries in config.xml and
    tries to match one context-root declaration. I repeat, I expected some mapping
    similar to Tomcat's server.xml, where in the <docbase> element you clearly specify
    the mapping between the context path and the physical directory
    Rob Woollen <[email protected]> wrote:
    Arindam Chandra wrote:
    I'm running WebLogic 7.0 on Windows 2000. The physical directory of my web application
    is D:\WL8\weblogic81\TestDeploy\build\TestWebApp and under this directory I have
    my JSPs, static HTML and WEB-INF. I define the context path of this web app in
    the weblogic.xml:
    <weblogic-web-app>
         <context-root>/testapp</context-root>
    </weblogic-web-app>
    As a result of deploying this web app in the server (or it may also be created manually),
    the following entry gets inserted in the server's config.xml:
    So the server will look for your web application at the Application Path
    (D:\WL8\weblogic81\TestDeploy\build) + the web uri (TestWebApp). So it
    maps the context-root you've specified, /testapp, to that path.
    It's a little clearer in the case where you had a full-fledged EAR.
    Then your application path would map to the "root" of the EAR, and the
    uris would point to the various modules (e.g. webapps).
    -- Rob
    Now, whenever I make a request of the form "http://localhost:7001/testapp/..",
    it properly executes my web app. My question is, how does the container know
    that for any request for the web app with context path 'testapp', it has to
    serve files from D:\WL8\weblogic81\TestDeploy\build\TestWebApp. In the above
    process, no such mapping is specified anywhere. I expected something like Tomcat's
    server.xml, where in docbase we clearly specify this mapping between the context
    path and the physical directory. Please help.

  • Audio/Video Issues in Tour de LC

    First off, I would like thank Adobe for producing this resource.
    In looking through the various pages contained within Tour de LC, I have found a couple of minor issues with some of the videos presented. A co-worker found a video that had no audio; unfortunately, I am not sure in which module it occurred. There is some kind of captioning in the video, so it may just be that it has no audio.
    In LiveCycle Overview > LiveCycle Solutions > Life Sciences > Life Sciences Demo Module, the Life Science Demo video does not sync with the audio. The video seems to stop while the audio continues.
    In the LiveCycle Overview > Resources > Adobe TV section, the Videos labeled 'Service-Orientated Architecture for the Enterprise' and 'Invoking a LiveCycle ES process from an Adobe AIR application' seem to be the same video.
    Again, Thank you for this great resource. I am sure it will be helpful to a great many LC users and developers.
    Vive La Digital Revolution!!
    Ryan D.E. Garner
    Systems Consultant
    Garner Data Systems, Inc.

    Hi Ryan,
    Thanks for the input and compliments on Tour De LiveCycle.
    Many of the videos in TDL do NOT have audio associated with them.  They simply have callouts on the screen directing the user.
    On the "LiveCycle Overview > LiveCycle Solutions > Life Sciences > Life Sciences Demo"    I do not see the same behavior as you do...it seems like it is ok.  Maybe it has to do with the network speed.
    We will fix the link on the Adobe TV to point to the correct video.
    Thanks for the input!
    -Jeff Kalicki
    Avoka Technologies

  • Reading Characteristics of a DMS Document if it does not exist yet

    Hi,
    Whenever a document is created with classification from transaction CV01N, entries are made in the database tables for the classification entered for the document. In this case I can get the characteristic values of the document classification from within a BAdI method (for example DOCUMENT_MAIN01~BEFORE_SAVE) by selecting from the database table.
    Is there any way to read the document classification from within method DOCUMENT_MAIN01~BEFORE_SAVE before the document is actually saved (when the corresponding entries do not yet exist in the database table)?

    Nemesis wrote:
    The original table t_data contains data which is normally deleted after 45 days, but it also stores data which is used for demonstration purposes by consultants, and which should not be deleted with the rest.
    Truncating would delete all the data in the table t_data, but the problem is that a subset of the data must be retained in the table. This means that either I adapt the purge code on this table not to delete the required data, or I displace the data that must be saved until the purge is done and then reinsert it into t_data...
    Choice 1:
    Use partitioned tables. Store the "temporary" data in partitions that can be deleted and the consultants' demo data in another partition that doesn't get truncated.
    Choice 2:
    Have two similarly structured tables that store the two types of data so that one table can be truncated as required and the other maintains the data. Use a view on top of the tables to union the data to appear as one "table" for use in the application.
    As already mentioned by others, creating tables at run time is just wrong. It's poor design, it's not scalable and it leads to dynamically generated queries which can inherently have bugs that will not be apparent until run-time and sometimes only under certain conditions, thus leaving your code very liable to break and very difficult to debug.
    Global Temporary tables are what are generally used for temporary storage of data within a transaction or a session if they suit, but it sounds as if you need the data retention across sessions with truncate option available as required on partial data. That is one of the purposes of partitions on tables... or of course go for the two table and view option if you haven't paid for partitioning in your licence.
    Edited by: BluShadow on Dec 22, 2009 8:33 AM
    LOL! Billy posted whilst I was typing. p.s. Glad to see you got your Ace badge Billy. Well deserved.
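    As a concrete illustration of choice 2, a minimal sketch (table and column names are hypothetical):
      -- Two identically structured tables: one purgeable, one permanent.
      CREATE TABLE t_data_volatile (id NUMBER, payload VARCHAR2(100), created DATE);
      CREATE TABLE t_data_demo     (id NUMBER, payload VARCHAR2(100), created DATE);

      -- A view unions them so the application still sees a single "table".
      CREATE OR REPLACE VIEW t_data AS
        SELECT id, payload, created FROM t_data_volatile
        UNION ALL
        SELECT id, payload, created FROM t_data_demo;

      -- The 45-day purge then never touches the consultants' demo rows.
      TRUNCATE TABLE t_data_volatile;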

  • Reporting Services authorisation and authentication

    I would like to use Reporting Services in our business application. I'm new to it and I'm not sure whether we can do this or not. We are using SQL Server authentication, and our stored procedure retrieves different information for different users (users have
    pseudo-privileges on table rows), so our report server should work the same way. I've read some information about custom authentication and want to ask: if I log in to SSRS with a custom username and password, how can these credentials
    be used in report data sources? What type of authentication should I select in the data source? Does SSRS automatically send user credentials to data sources?
    Thanks for reading :-)

    I see.
    Those constraints were not clear from your original question.
    Thank you for clarifying.
    A bit more clarification is required, though:
    Do you plan to use SSRS embedded in your application (by using the Reporting Services API)?
    If so, you might find it easier to simply deliver an additional parameter to your reports to serve as a sort of "user token" (a "UserID", for example), so that your reporting queries know which user is querying them; the user token would be supplied by your application code, hidden away from the user. Based on this user token, you might want to use impersonation so that your existing procedures work properly.
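    As a rough illustration of the user-token idea, a minimal sketch in T-SQL (the procedure, tables, and row-access mapping are all hypothetical):
      -- @UserToken arrives as a hidden report parameter set by the application.
      CREATE PROCEDURE dbo.GetOrdersForUser
          @UserToken INT
      AS
      BEGIN
          SET NOCOUNT ON;
          -- Filter rows down to what this user's pseudo-privileges allow.
          SELECT o.OrderID, o.Amount
          FROM   dbo.Orders AS o
          JOIN   dbo.UserRowAccess AS a
                 ON a.OrderID = o.OrderID
          WHERE  a.UserID = @UserToken;
      END;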
    Please note that you might want to use SSL in order to encrypt your traffic over the network in such a case.
    Also note that I'm only making these convoluted suggestions because you mentioned that it must be SQL Authentication. If you could accept Windows Authentication, you'd be able to use the built-in Windows NT authentication mechanism, which would be much easier for you to implement in your reports.
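    For illustration, a minimal sketch of the hidden "User Token" idea in T-SQL; the procedure and table names (dbo.GetOrdersForUser, dbo.Orders, dbo.UserOrderAccess) are hypothetical stand-ins for your existing objects:
    -- The report defines a hidden, application-supplied parameter that is
    -- mapped to @UserID; the data source keeps its stored SQL credentials.
    CREATE PROCEDURE dbo.GetOrdersForUser
        @UserID int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT o.OrderID, o.OrderDate, o.Amount
        FROM dbo.Orders AS o
        JOIN dbo.UserOrderAccess AS a   -- the "pseudo-privileges" mapping
          ON a.OrderID = o.OrderID
        WHERE a.UserID = @UserID;       -- only rows this user may see
    END;
    The application code supplies the UserID value when it renders the report, so the end user never sees or edits it.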
    Eitan Blumin; SQL Server Consultant - Madeira Data Solutions;

  • [DW MX 2004] inserting a date

    Hello,
    Is there any way to insert the date into a MySQL database (either into a DATE or a DATETIME field) from Dreamweaver without doing a lot of acrobatics with the code?
    With a hidden form field the date gets stored as all zeros, and hand-editing the value (substituting NOW() for the date VALUE) gives me an error.
    Any idea, or a tutorial on how to insert the date/time into the database?
    Thanks.
    Below is the code Dreamweaver generates for one of the examples I have tried.
    Regards.
    <?php
    if (!function_exists("GetSQLValueString")) {
      // Dreamweaver helper: escape and quote a value according to its type.
      function GetSQLValueString($theValue, $theType, $theDefinedValue = "", $theNotDefinedValue = "")
      {
        $theValue = get_magic_quotes_gpc() ? stripslashes($theValue) : $theValue;
        $theValue = function_exists("mysql_real_escape_string") ? mysql_real_escape_string($theValue) : mysql_escape_string($theValue);
        switch ($theType) {
          case "text":
            $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
            break;
          case "long":
          case "int":
            $theValue = ($theValue != "") ? intval($theValue) : "NULL";
            break;
          case "double":
            $theValue = ($theValue != "") ? "'" . doubleval($theValue) . "'" : "NULL";
            break;
          case "date":
            $theValue = ($theValue != "") ? "'" . $theValue . "'" : "NULL";
            break;
          case "defined":
            $theValue = ($theValue != "") ? $theDefinedValue : $theNotDefinedValue;
            break;
        }
        return $theValue;
      }
    }

    $editFormAction = $_SERVER['PHP_SELF'];
    if (isset($_SERVER['QUERY_STRING'])) {
      $editFormAction .= "?" . htmlentities($_SERVER['QUERY_STRING']);
    }

    // Insert the submitted form values into the noticias table.
    if ((isset($_POST["MM_insert"])) && ($_POST["MM_insert"] == "form1")) {
      $insertSQL = sprintf("INSERT INTO noticias (autor, titulo, categoria, fecha, noticia) VALUES (%s, %s, %s, %s, %s)",
                           GetSQLValueString($_POST['autor'], "text"),
                           GetSQLValueString($_POST['titulo'], "text"),
                           GetSQLValueString($_POST['categoria'], "text"),
                           GetSQLValueString($_POST['fecha'], "date"),
                           GetSQLValueString($_POST['noticia'], "text"));
      mysql_select_db($database_noticias, $noticias);
      $Result1 = mysql_query($insertSQL, $noticias) or die(mysql_error());

      // Redirect after a successful insert, preserving the query string.
      $insertGoTo = "index.php";
      if (isset($_SERVER['QUERY_STRING'])) {
        $insertGoTo .= (strpos($insertGoTo, '?')) ? "&" : "?";
        $insertGoTo .= $_SERVER['QUERY_STRING'];
      }
      header(sprintf("Location: %s", $insertGoTo));
    }

    mysql_select_db($database_noticias, $noticias);
    $query_Recordset1 = "SELECT * FROM noticias ORDER BY id_noticia DESC";
    $Recordset1 = mysql_query($query_Recordset1, $noticias) or die(mysql_error());
    $row_Recordset1 = mysql_fetch_assoc($Recordset1);
    $totalRows_Recordset1 = mysql_num_rows($Recordset1);
    ?>

    When inserting a date into a database from PHP, a lot of problems can come up if you are not clear about the concepts. That is why I always recommend storing the current UNIX timestamp in an int(11) field, that is, the seconds elapsed since 1 January 1970. Why use this instead of MySQL's dedicated date types?... very simple: convenience.
    It is much easier to format a UNIX timestamp with the date() function than a field retrieved from the database stored as DATE (YYYY-MM-DD) or as TIMESTAMP (anywhere from YYYYMMDDHHMMSS down to just YY, depending on the size applied to the column when it is created). For example, TIMESTAMP(12) --> YYMMDDHHMMSS, while TIMESTAMP(6) --> YYMMDD.
    As you can see, working with MySQL's date columns is a bit messy, which is why it is preferable to work with an INT(11) that directly stores the value returned by time().
    An additional advantage of working with a date's UNIX timestamp is that calculations between dates become very easy. For example, if you want to know how many days lie between two given dates, it is as simple as subtracting the timestamps and dividing the result by 86400 (the number of seconds in a day).
    Another very useful function for some date calculations or validations is mktime(). I usually use it to convert dates collected from a form (02/06/2006) into a UNIX timestamp before saving them to the database. Something that may help you:
    $datos_fecha = explode('/', $_POST['fecha']);
    $dia = $datos_fecha[0];
    $mes = $datos_fecha[1];
    $ano = $datos_fecha[2];
    $timestamp = mktime(0, 0, 0, $mes, $dia, $ano);
    With this we go from a date collected in a form via POST in DD/MM/YYYY format to its corresponding UNIX timestamp, stored in the $timestamp variable so it can be used later when saving the date to the database.
    Regards,
    Julio Barroso
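    For completeness, a minimal sketch of both options in plain MySQL, reusing the noticias table from the question (fecha_ts is a hypothetical INT(11) column for the timestamp variant). The asker's NOW() attempt presumably failed because Dreamweaver's GetSQLValueString quotes the value, so the literal string 'NOW()' reaches MySQL instead of the function call:
    -- (a) DATE/DATETIME column: let MySQL stamp the row itself
    INSERT INTO noticias (autor, titulo, categoria, fecha, noticia)
    VALUES ('ana', 'Titulo', 'general', NOW(), 'Texto...');
    -- (b) INT(11) column holding a UNIX timestamp, as recommended above
    INSERT INTO noticias (autor, titulo, categoria, fecha_ts, noticia)
    VALUES ('ana', 'Titulo', 'general', UNIX_TIMESTAMP(), 'Texto...');
    -- Formatting the stored timestamp back into DD/MM/YYYY for display
    SELECT FROM_UNIXTIME(fecha_ts, '%d/%m/%Y') AS fecha FROM noticias;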

  • 'Embed system time as timecode' not accurate?

    It appears that the "system time" timecodes are not updated on every frame, even when 'Embed system time as timecode' is checked, and 'Frame Interval' is set to 1.
    The timecodes routinely latch onto a certain value, so that several timecodes in a row will report the same system time.
    To study this behavior, I recorded a video of a computer screen displaying the current time as an epoch timestamp in milliseconds, and, from the recorded .flv file, read back the timecodes from each video frame as well as the timecodes embedded as amf0 'onFI' events as recorded by FMLE.
    You may consult the data I collected below:
    A graph showing embedded timecodes above video time:
    http://adamflorin.com/xfer/adobe/timecodes-ahead-of-video.pdf
    The complete data:
    http://adamflorin.com/xfer/adobe/fmle-timecode-bug.xls
    The FMLE-created video file used:
    http://adamflorin.com/xfer/adobe/fmle-output.flv
    This data was collected on a 2008 MacBook.
    Has anyone else seen behavior like this? I look forward to doing more tests, on different machines and with different settings, but because those tests are so laborious to run, I thought I'd reach out first.
    Unfortunately this is quite urgent, so I'm afraid I'll have to investigate other technologies. Any help is greatly appreciated, THANKS!

    For what it's worth, frame rate appears to be a factor. I see this issue consistently at 30fps, but never at 24fps or 25fps, where the amf0 timecodes are fairly evenly spaced, at about the duration of one frame.
    I'm using VP6 at 640x360, a Logitech C910, Mac OS 10.7.3, FMLE 3.2.0.99.32.
    This behavior seems pathological to me. Has anyone else seen this?

Maybe you are looking for

  • Assign a Javascript variable value to a ABAP variable

    Hi, I wish to assign a JavaScript variable's value to an ABAP variable. Any ideas on how to do that? Many thanks. Rgds, Bernice

  • Doc. Curr change in MIRO - Urgent

    Hi all of you, We have 3 vendors in one P.O.: one for material, one for freight, and one for the customs house. We have configured the system to create 3 invoices at a time with reference to the P.O. (GR-based invoice with respect to the PO). My client is 100% EOU so that most of the cases will

  • Problems with Zip Classes

    I am having a hard time zipping up an EJB. I would like to zip the META-INF folder and its contents along with the package structure and files in the package. Now that you know the end result, let me ask the question: how does one use the ZipOutputS

  • Converting PDF to Illustrator

    I've got a PDF that seems to open OK in Illustrator; it was originally produced by a CAD program: http://www.bavariayacht.info/downloads/Schematic%20Panel%2020.pdf However, none of the text is rendered as text; instead, each character is made up of a

  • Running Unity with both Exchange 2003 and 2010?

    My company is planning to upgrade from Exchange 2003 to 2010.  We are currently using Unity 7.0.  We'd like to run the two in parallel for a bit and take our time switching users over. Now I've read that Unity can support Exchange 2010, but I haven'