Some questions on Information Broadcasting

Hi,
We're implementing Information Broadcasting. I have some questions and would be delighted if you could answer some or all of them!
1. When using web templates I get the following messages after opening my zip file (independent HTML file) received through e-mail:
- 'BEx Broadcaster does not support precalculation.' What does this mean? We do get precalculated figures.
- The Broadcaster demands that JavaScript can be executed in the browser. How do we solve this?
2. When I broadcast a query as a zip file (HTML with separate MIME files), the precalculated views have no drilldown functionality, i.e. the navigation block does not work (it is static). I put navigation on the distribution channel. Is this the normal way of working?
3. The broadcast functionality does not offer the distribution type 'Export to Enterprise Portal'.
We have an Enterprise Portal. Is additional customizing required?
4. Process chains have the Event Data Change process. This only works if I attach it directly to a cube. When I connect it to the data load process (i.e. to an InfoPackage) it does not work. Any ideas?
Thanks in advance!!
Kind regards, Harjan

Hi Harjan,
  Not sure what the issue is with regard to (1).
  With regard to (2), you need to click on the 'Filter Navigation' tab in the BEx Broadcaster, enable filtering, and select the characteristics you wish to filter on.
  With regard to (3), yes, there's additional configuration required in order to be able to broadcast into the Enterprise portal. Here's the documentation: http://help.sap.com/saphelp_nw04/helpdata/en/84/30984076b84063e10000000a1550b0/content.htm
  With regard to (4), the idea behind integrating the broadcasting event into a process chain is to perform the broadcast only if the InfoProvider was loaded successfully, to ensure the end user has accurate and up-to-date information. http://help.sap.com/saphelp_nw04/helpdata/en/ec/0d0e405c538f5ce10000000a155106/content.htm
Regards,
Oliver

Similar Messages

  • Scheduling question for information broadcasting

    I am confused between two options in Information Broadcasting:
    1) execution at a predefined time
    2) direct scheduling in background processing
    I have no idea what the difference between them is.
    I read the SAP help, but it is still not clear to me.

    Hi,
    In BI 7 we have various options.
    If you have a Java stack on the BI server, broadcasting uses the Portal; there you can configure the settings, give them a technical name, and then schedule them.
    We faced a problem: our system has no Java stack, so we used the BW 3.5 WAD in the BI 7 system. In the WAD we configured the broadcasting settings and saved them under a technical name. We then used the program RSRD_BROADCAST_STARTER, created a variant for it containing that technical name, and in the process chain, after the cube load, added this program with the variant. Once the data loads happen, the reports are distributed.
    Thanks
    Reddy
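
    Reddy's approach - running the broadcast setting from a process chain step via RSRD_BROADCAST_STARTER - can be sketched in ABAP. This is only a hedged sketch: the wrapper report name and the variant name ZBC_SALES are illustrative assumptions; only the standard program RSRD_BROADCAST_STARTER comes from the post above.

    ```abap
    * Hedged sketch of Reddy's approach: a small wrapper report that an
    * ABAP process-chain step can run after the cube load. The variant
    * name 'ZBC_SALES' is hypothetical; it would hold the technical name
    * of the broadcast setting saved in the BW 3.5 WAD.
    REPORT zbc_run_broadcast.

    START-OF-SELECTION.
    * Run the standard broadcast starter with a saved variant,
    * then return control to the process chain.
      SUBMIT rsrd_broadcast_starter
        USING SELECTION-SET 'ZBC_SALES'
        AND RETURN.
    ```

    Alternatively, as Reddy describes, RSRD_BROADCAST_STARTER can be added directly as an ABAP program step with the variant, without any wrapper.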

  • Questions on Bex Broadcaster

    Hello All,
    I have some questions regarding BEx Broadcaster. Can anyone please answer them?
    Is there a way to monitor BEx Broadcaster so that we are informed in case reports could not be sent? If yes, how?
    Could this be automated (e.g. so that we get an e-mail if there was any issue)?
    If BEx Broadcaster is down while it should send out some reports, will the reports be distributed later once BEx Broadcaster is up and running again?

    Srikanth,
    As part of your support you could check transactions SOST and SCOT daily. This will tell you if the Broadcaster has not sent an email out to a certain recipient.
    I'm not sure if you can somehow put the process of sending emails in BEx Broadcaster into a process chain. Then you could have a step which sends you a mail if the process fails.
    Thanks,
    Nick.

  • Information Broadcaster - textvariable

    Hi All,
    one more question regarding information broadcasting.
    I want to use my own text variable as the text for an e-mail. I created a text variable in Query Designer and thought this would solve the problem, but the text variable is not available. Is there another way to create a text variable, or is it not possible to use one's own text variable?
    Regards
    Juergen

    OJ,
    In the examples mentioned above, I was referring to Information Broadcasting. I was hoping that you could use text variables as described on help.sap:
    http://help.sap.com/saphelp_nw04/helpdata/en/20/0dfa40fe14f523e10000000a155106/frameset.htm
    Texts
    Here you can make the following entries:
    ·        Subject: Enter a subject line for the e-mail manually, or use a text variable via Attach Text Variable. The text can be up to 50 characters long.
    ·        Importance: Select the desired importance of the e-mail (low, medium, high).
    ·        Contents: Enter text for the contents of the e-mail manually, or use a text variable via Attach Text Variable. When sending online links, the system automatically adds a link to the text of the e-mail via the variable PR_ONLINE_LINK if it is not already part of the text.
    Can someone share some information.
    Raj.

  • Information broadcasting report output

    Hi,
    We are receiving some reports via Information Broadcasting. But when we open those reports we cannot see the navigation block or the preselection criteria.
    Is there any way we can achieve this?
    Any urgent help will be appreciated.
    Thanks and regards
    Kiran

    If you try the option 'Online Link to Current Data', you get a display similar to the query output, where you can see the preselection criteria and the navigation block.
    Prathish

  • Information Broadcasting on Portal and KM

    Hello all,
    We are about to implement a BI-Portal integration for Information Broadcasting soon. At present we are planning to integrate BI and its BI runtime portal for Information Broadcasting - and our BI Portal does not have KM or TREX yet.
    I would just like to confirm one issue. I've read materials but I'm still not 100% sure on this issue: does Information Broadcasting require the KM installation together with the EP?
    It says on the FAQ:
    In addition, using Broadcasting not only using e-mail but also with the SAP NetWeaver Portal requires the additional installation of a J2EE Server with SAP NetWeaver Portal or KM.
    What I understand, do correct me if I'm wrong, is that when you publish into the Portal - it will reside on the PCD or in WebDAV (KM).
    Question: is it mandatory to install KM for this scenario? Or can Information Broadcasting live without KM?
    I believe there are benefits to having KM - like subscriptions, or having broadcasts accessible via the BEx Portfolio under the Business Explorer role in the portal - but I'm quite confused as to whether KM is required to be installed.
    Thanks in advance all.

    Thanks for the prompt response.
    It does appear that KM is required, but some articles I've read did not point out that it is mandatory. So overall this one's kind of confusing - well, at least for me, having no experience with BI-Portal integration.
    But SAP Note 830385 - though it is based on SAP BW 3.5 and SAP Enterprise Portal 6.0 - states that:
    "Knowledge Management and Collaboration must also be available on the portal because the required BW Java classes are also installed during this installation."
    Anyway, thanks again. I will leave this thread open, and hopefully someone who has implemented Information Broadcasting can shed more light on this topic.
    -Jan

  • Some questions on SAP DMS

    Hello
    some questions on SAP DMS:-
    - can versions of the same document be shown graphically?
    - can links be created for documents? Can we make sure that links are not broken when a document is moved? (links to the document in its first location shall not be broken)
    - is there a notification mechanism (can a user receive an e-mail when a document gets changed)?
    - is there a review mechanism before a document is archived (a sort of life cycle)?
    Thanks

    Hi Shovon,
    1. can versions of same document be shown graphically?
    You can display all existing versions of a document info record via the menu 'Extras' >> 'Versions'. You will then see all existing document versions for this specific document number, as well as which version is currently the valid one.
    2. can links be created for documents. can we make sure that links are not broken when document is moved ?(links to document on 1st location shall not be broken)
    I'm not quite sure what you mean by links to documents. If you mean linking document info records together, I can confirm that this also works in DMS. You can maintain the object DRAW for object links in Customizing, and then you will be able to link one document info record to another. The data for all document info records is stored in table DRAW.
    If you are talking about originals which are linked to document info records, please note that when you check in an original, the system creates unique LOIO and PHIO IDs which are stored in the Content Server and in the database tables DMS_DOC2LOIO and DMS_PHIO2FILE. But there are also different reports for migrating original data from one Content Server to another (e.g. DMS_RELOCATE_CONTENT).
    3. Is there a notification mechanism(user can recieve e-mail when document gets changed)
    Yes you can activate an event type linkage for object
    DRAW     CHANGED     DOKST
    DRAW     CHANGED     DWNAM
    in transaction SWE2. Here you just have to set the flag 'Type linkage' (linkage activated). Then every time the document is changed, a notification is sent to the user entered in this document info record.
    4. Is there a review mechanism, before doc. is archived.(sort life cycle)
    Normally there is no special life cycle. You can control this by maintaining a released status; if there are several versions, the version with the released status is always the current one. But there is no specific date on which the document info record expires, or anything like that.
    Best regards,
    Christoph
    P.S.: Please reward points for useful information.
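
    For completeness, the version list Christoph describes in point 1 can also be read straight from table DRAW. A hedged sketch - the document type and number below are made-up illustrative values:

    ```abap
    * Hedged sketch: read all versions of one document info record from
    * table DRAW (point 1 above). Document type/number are illustrative.
    DATA lt_versions TYPE STANDARD TABLE OF draw-dokvr.

    SELECT dokvr FROM draw
      INTO TABLE lt_versions
      WHERE dokar = 'DRW'        " document type (assumed)
        AND doknr = '10000123'.  " document number (assumed)

    * lt_versions now holds every version (00, 01, ...) of the document.
    ```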

  • Some questions on versioning and synchronizing metadata

    Hi all!
    I am quite new to warehousing and Oracle Warehouse Builder, so I have some questions regarding common issues. I would appreciate it if you guys who have experience in this domain could share some good-practice knowledge :)
    I am using OWB 10.2
    First of all, I would like to know if you have a proposal for version control and synchronizing projects between team members when working on a bigger project - team members who don't work on the same repository (because I saw that OWB has integrated multi-user support for handling object locks and user sessions).
    I saw that one way of migrating data from one place to another is using the import/export options integrated in OWB. This creates .mdl files, which are a kind of "dump" of the metadata. But the problem with these .mdl files, which makes me think they are not a good way to synchronize, is that the .mdx and .xml files contained in the .mdl (which is a kind of zip) contain a lot of information (like creation date, some timestamps, etc.) that is updated on every export. If we synchronize these files, say using CVS, we will always get differences between the files even though they contain the same thing - only the timestamps changed.
    Another issue with this is that we have two alternatives: dump the whole project, which makes it awkward to synchronize a single file between users, especially on a big project; or export a separate .mdl file for each object in the project (each mapping, each table, etc.) and synchronize each object's file individually, which is inefficient when reimporting each file.
    So please, if you can share the way you work on a big project with many implementers in OWB, I would really appreciate it.
    Another thing I would like to know: is there a way to generate, from an existing project (like one created with OWB), a dump of the OMB commands (maybe as a Tcl script)? I saw that experienced users implement warehousing using Tcl with the OMB language. I downloaded the example warehouse project from Oracle and saw that it is made entirely of Tcl scripts (no .mdl file involved). It would be nice to have the OMB commands generated from an existing project.
    I see an OWB project like a database which can be built up from OMB commands alone, with OWB as a graphical tool for doing this (just as a database can be constructed from DDL commands alone or with SQL Developer); this is why I am asking about a way of dumping the OMB commands that create an OWB project.
    Please give me some advice, and correct me if I said something dumb :D - I really am new to warehousing and would appreciate it if you guys with experience could share some information.
    Thank you very much!
    Alex21

    Depends. Having everyone working on the same project certainly simplifies things a lot regarding merging and is generally my preference. But I also recognize that some projects are complex enough that people wind up stepping on each other's toes if this is the case. In those cases, though, I try to minimize the issue of merging changes by having common structural objects (code libraries, tables, views, etc) retained in a single, strictly controlled, central project schema and having the developer's personal work areas reference them by synonym, thus being unable to alter them to the detriment of others.
    If they want to change a common object, they need to drop their synonym and make a local copy which they can alter; there is then a managed process by which these changes get merged back into the main project schema.
    This way any changes MUST go through a central schema, we can put processes in place to notify all of the team of any impending changes, and can also script updates across the team.
    Every hour a script runs automatically that checks for dropped synonyms and notifies the project leader. It especially checks for two developers who have built local copies of the same object and notifies each that they need to coordinate with each other as they are risking a conflict. When a structural change is submitted back to the central shared schema, it is added to a batch that is installed at end of business and a list of those impending changes is circulated to the team along with impact analysis for dependencies. The install script updates the main schema, then also drops the local copy of the object in the developer's schema who made the change and re-establishes the synonym there to get back to status quo for the change monitoring. Finally, it then updates itself in all of the developer areas via OMBPlus. So, each morning the developers return to an updated and synched environment as far as the underlying structure.
    This takes care of merging structural issues, and management of the team should minimize other metadata merging by managing the worklist of who is to be working on a given mapping or process flow at a given time. Anyone found making extraneous changes to a mapping or process flow when it is not in their job queue, without getting pre-approval, will be spoken to VERY firmly, as this is counter to policy. And yes, OWB objects such as mappings are then also coordinated to the central project via import/export. OMBPlus scripts also propagate these changes daily across the team as well.
    Yep, there is a whole lot of scripting involved to get set up... but it saves a ton of time merging things and solving conflicts down the road.
    Cheers,
    Mike

  • Some questions regarding ESB system.

    Hi all,
    I've used my ESB system for a few months now, so I thought it would be interesting to look at what's going on in the database schema created by my ESB system (oraesb). This led to some questions (and raised eyebrows); I hope some of you SOA experts might have an answer.
    * Is my system installed properly:
    I noticed that the oraesb schema created by running IRCA.zip installs only tables, views, topic queues and one procedure (create_queue). However, looking at the SQL scripts in ${soaSuite_home}/integration/esb/sql/oracle, there are far more stored procedures defined. Is it normal/okay that these objects are not installed, or is my ESB system faulty?
    * No constraints or indices:
    My system is still very small, so it is still performing well/fast. I imagine that as the ESB system grows, performance and locking will become an issue due to the lack of indices on foreign key columns and of primary/foreign key constraints.
    * Only small part of schema used:
    When browsing through the tables of the oraesb user I notice that only a few tables are filled with data. For example all "Slide Tables" (this is the name given to these group tables in file ${soaSuite_home}/integration/esb/sql/oracle/wfeventc.sql) are empty. Is this normal? What kind of processes should enter data in these tables? What is the use of the "Slide tables"?
    * AQ-tables not being used:
    My ESB system has five AQ queue tables (ESB topics), but they are never used! I recall another thread on this forum about these queue tables growing enormous in size. I guess there must be a switch somewhere to toggle between JMS queues and AQ queues? Can anyone please explain how to switch on the AQ queues, or point me to the proper documentation? I must have overlooked this in the documentation.
    Kind regards,
    Happy new year,
    H

    When your podcast is accepted you should receive an email telling you this and giving the URL for its page in the iTunes Store. The string of numbers at the end is the ID number.
    It usually takes somewhat longer for a new podcast to appear in the search results. Once you can find it by searching on the title, you can get the Store page URL, if you still don't know it, by control-clicking on the podcast image (or where it should be) and choosing 'Copy Podcast URL'.
    You may find this page helpful in giving you basic information about podcasting:
    http://www.wilmut.org.uk/pc

  • Some questions about the integration between BIEE and EBS

    Hi, dear,
    I'm a newbie to BIEE. These days I have had a look at the BIEE architecture and the BIEE components. In the next project there is some work on BIEE development based on the EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool is preferred by most developers - Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how do we keep it complete? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data in the reports? Is there any transaction control in the ETL phase?
    could anyone give some guide for me? I'm very appreciated if you can also give any other information.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both BIEE 10g and 11g be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as a reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of the source. This is OBIEE modeling: creating the RPD with all its layers. If you build it from scratch you will have to create all the layers; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    3) If the physical table creation is needed, how is the data transfer from the EBS source tables to the BIEE physical tables done? Which ETL tool is preferred by most developers?
    BI Apps comes with pre-built ETL mappings, mainly for use with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle has plans to use only ODI in further releases.
    4) During the data transfer phase, if a large volume of data is being transferred, can users see the partially loaded data in reports?
    Users will still see the old data, because it is good practice to turn on caching and purge the cache after every load.
    Refer..http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on google
    Hope this helps

  • Error when executing a Query through information Broadcasting

    Dear SDN,
    We have configured Information Broadcasting for the web reports and scheduled it, and when executing, the following error occurs:
    500 internal server error - Microsoft internet explorer
    Error when processing your request
    What has happened?
    The URL http://xyz.com:8000/sap/bw/BEx was not called due to an error.
    Note
    The following error text was processed in the system BWD : Please enter a valid value for characteristic 0CALMONTH.
    The error occurred on the application server asalbwd_BWD_00 and in the work process 0 .
    The termination type was: ERROR_MESSAGE_STATE
    The ABAP call stack was:
    START-OF-SELECTION of program RSRD_BROADCAST_PROCESSOR
    What can I do?
    If the termination type was RABAX_STATE, then you can find more information on the cause of the termination in the system BWD in transaction ST22.
    If the termination type was ABORT_MESSAGE_STATE, then you can find more information on the cause of the termination on the application server asalbwd_BWD_00 in transaction SM21.
    If the termination type was ERROR_MESSAGE_STATE, then you can search for more information in the trace file for the work process 0 in transaction ST11 on the application server asalbwd_BWD_00 . In some situations, you may also need to analyze the trace files of other work processes.
    If you do not yet have a user ID, contact your system administrator.
    Error code: ICF-IE-http-c:900-u:VENKAT -l:E-s:BWD-i:asalbwd_BWD_00 -w:0-d:20080708-t:133744-v: ERROR_MESSAGE_STATE-e:Please enter a valid value for characteristic 0CALMONTH.
    HTTP 500 - Internal Server Error
    Your SAP Internet Communication Framework Team
    Please suggest how we can resolve this error.
    Help will be greatly appreciated with points.
    Thanks in advance

    Hi
    Thanks...
    I have given only the Fiscal Year/Period variable.
    Other than the Fiscal Year/Period variable, there are no variables in that query.
    In the FI queries, I did not use a 0CALMONTH variable.
    I do not know why it is giving that error.
    Please suggest how we can resolve this error.
    Thanks in advance

  • Issue with Information broadcasting and pre-cal server in SAP BI

    Hi Experts,
    I have an issue with Information Broadcasting and the precalculation server. I have a workbook which runs from Information Broadcasting on the pre-cal server. The workbook is designed on a single query and contains Visual Basic code. The workbook has a variable which is controlled by a control query in Information Broadcasting.
    I have 20 employees, and every month I need to send the workbook via e-mail from Information Broadcasting. The problem is that sometimes all 20 employees receive e-mails and sometimes not. I identified that something is wrong in the workbook or in the pre-cal server, because after the workbook calculation the pre-cal server does not push anything to SOST.
    I closely observed the pre-cal server front-end log. There are a few operations performed on the workbook in the pre-cal server: Open workbook, Calculate workbook, Save workbook, and Close workbook. In the success case the pre-cal server performs all the operations, but in the failure case the 'Close workbook' step is missing.
    Below are logs I pulled from the pre-cal server.
    Can you please tell me what the problem could be?
    Successful job
    3/11/2011 9:29:46 AM (3) -> RS_PREC_LAUNCH_EXCEL invoked in thread 3 and job 'BIBCAST4L4EIS7TOR5BWG02PCUG9ZPPP'.
    3/11/2011 9:29:46 AM (3) -> InitConnection in thread 3
    3/11/2011 9:29:46 AM (3) -> Trying to open "C:\Program Files\Common Files\SAP Shared\BW\BExAnalyzer.xla"
    3/11/2011 9:30:18 AM (4) -> RS_PREC_GET_SERVER_STATUS invoked.
    3/11/2011 9:30:18 AM (4) -> RS_PREC_GET_SERVER_STATUS finished
    3/11/2011 9:30:35 AM (3) -> Using Version 7100.4.1200.35 of BExAnalyzer.xla
    3/11/2011 9:30:35 AM (3) -> PID of Excel process: "2504"
    3/11/2011 9:30:35 AM (3) -> EndOfInitConnection in thread 3
    3/11/2011 9:30:35 AM (3) -> RS_PREC_LAUNCH_EXCEL finished
    3/11/2011 9:30:36 AM (0) -> Calculation Request 911A35ED029D4D79DD7A000200000000 received for job 'BIBCAST4L4EIS7TOR5BWG02PCUG9ZPPP'.
    3/11/2011 9:30:52 AM (6) -> RS_PREC_GET_SERVER_STATUS invoked.
    3/11/2011 9:30:52 AM (6) -> RS_PREC_GET_SERVER_STATUS finished
    3/11/2011 9:31:22 AM (0) -> Opening workbook: C:\WINDOWS\TEMP\BW\Analyzer\Workbooks\SAPBEXPRECMML4NNDVLYAPC2SYIC7F1AAIO_0.xls
    3/11/2011 9:31:43 AM (0) -> Refresh BExAnalyzer.xla!MenuRefreshPrecalc returned with 1.
    3/11/2011 9:31:43 AM (0) -> Calculated workbook C:\WINDOWS\TEMP\BW\Analyzer\Workbooks\SAPBEXPRECMML4NNDVLYAPC2SYIC7F1AAIO_0.xls saved.
    3/11/2011 9:31:43 AM (0) -> Calculated workbook C:\WINDOWS\TEMP\BW\Analyzer\Workbooks\SAPBEXPRECMML4NNDVLYAPC2SYIC7F1AAIO_0.xls closed.
    3/11/2011 9:31:44 AM (0) -> Excel based operations finished.
    Failure job
    3/10/2011 10:22:58 AM (1) -> RS_PREC_LAUNCH_EXCEL invoked in thread 1 and job 'BIBCAST4L41566ZBZDN2NTGJR0462ZFX'.
    3/10/2011 10:22:58 AM (1) -> InitConnection in thread 1
    3/10/2011 10:22:58 AM (1) -> Trying to open "C:\Program Files\Common Files\SAP Shared\BW\BExAnalyzer.xla"
    3/10/2011 10:23:20 AM (3) -> RS_PREC_GET_SERVER_STATUS invoked.
    3/10/2011 10:23:20 AM (3) -> RS_PREC_GET_SERVER_STATUS finished
    3/10/2011 10:23:44 AM (1) -> Using Version 7100.4.1200.35 of BExAnalyzer.xla
    3/10/2011 10:23:44 AM (1) -> PID of Excel process: "2544"
    3/10/2011 10:23:44 AM (1) -> EndOfInitConnection in thread 1
    3/10/2011 10:23:44 AM (1) -> RS_PREC_LAUNCH_EXCEL finished
    3/10/2011 10:23:44 AM (5) -> Calculation Request 911A35ED02654D789871000900000000 received for job 'BIBCAST4L41566ZBZDN2NTGJR0462ZFX'.
    3/10/2011 10:24:27 AM (0) -> RS_PREC_GET_SERVER_STATUS invoked.
    3/10/2011 10:24:27 AM (0) -> RS_PREC_GET_SERVER_STATUS finished
    3/10/2011 10:24:31 AM (5) -> Opening workbook: C:\WINDOWS\TEMP\BW\Analyzer\Workbooks\SAPBEXPRECQB2VEA8O8D8FBHYCR4HVB5UI8_0.xls
    3/10/2011 10:24:52 AM (5) -> Refresh BExAnalyzer.xla!MenuRefreshPrecalc returned with 1.
    3/10/2011 10:24:52 AM (5) -> Calculated workbook C:\WINDOWS\TEMP\BW\Analyzer\Workbooks\SAPBEXPRECQB2VEA8O8D8FBHYCR4HVB5UI8_0.xls saved.
    Thanks in advance
    Narendra

    Hi Ravikanth,
    Thank you very much for the reply
    I went into RSPRECADMIN and clicked on 'Display Current Queue'. There we have three sections:
    1) Queue Overview of Open Precalculations
    I can see that this section is always blank, even after running IB.
    2) Queue Overview of Current Precalculations
    In this section I can see an entry after running IB, and the duration is changing. But sometimes the workbook in this section is never processed, although the duration column keeps changing, so maybe something is happening at this stage.
    If the entry is not processed in this section, the IB job in SM37 never ends. I cancelled the job manually from SM50.
    I don't know why the workbook takes so long and never ends, even though I cancelled the SM37 IB job.
    3) Queue Overview of Processed Error-Free Precalculations
    This lists all the error-free workbooks, i.e. those for which the pre-cal server completed the workbook calculation and sent the result to SOST.
    I ran the workbook manually for all 20 employees and could not find any pop-up message (earlier we had pop-up windows appearing because of the VB code when the report returned no data, but we fixed those pop-up issues).
    Can you please help me further to trace the error?
    Thank you
    Narendra

  • I'm going to install Arch this weekend but I have some questions

    Currently I use Linux Mint on my primary PC, but I've installed Arch on my older PC at my parents house.  I like it a lot and I think I want a distro that is rolling release and also that I build myself (as opposed to installing all of the bloat on Mint).  However, I do have some questions; Linux is pretty new to me (been using it since about November) so I don't know exactly how everything works.
    1)  How do I know what packages I have to have installed for building C/C++ programs?  I know in Ubuntu installing gcc/g++ by itself doesn't get the job done; you need build-essential to get all of the libraries.  Also, what packages do I need for OpenGL and SDL?  I'm a CS major and I'm in an OpenGL class, so I need to be able to compile OpenGL programs; SDL is for my own purposes, but I'd still like to have it.
    2)  I use Amarok to transfer songs to my iPod.  Amarok 1.48 and libgpod 0.6.0 are in the repos and I know both of those are compatible with my 6th gen.  However, I do have a question about transferring album art.  On Mint, Amarok transfers the album art as I transfer the album, but someone told me that Amarok doesn't do that by default; there must have been a setting changed somewhere to do that.  I looked through Amarok's options and didn't see anything like that... anyone know if I can do that in Arch, and how?  There's no real information about Amarok in the wiki.
    3)  I'm going to use the GNOME environment but some KDE apps (like Amarok).  Will there be any problems with that which I should know about in advance?
    I'll probably have some more questions once I actually install Arch, but that will do for now   Those are the important ones.

    1) You can tell by checking a package's dependencies in pacman or in the developer documentation. Unlike other distros, you don't need to install separate dev packages. For example: you have gcc and all the basic GNU tools installed, and you want to compile a program that isn't in an Arch Linux package yet (if there is a package, you can build it using ABS/makepkg, and pacman will handle the dependencies). If it requires, say, a library called "xyz", you only need to install "xyz" with pacman, and all the dev files (headers and so on) will come with it.
    I hope it helps.
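    To make that concrete, a hypothetical sketch of the pacman workflow might look like the commands below. The package names (base-devel, mesa, glu, sdl) are assumptions based on the era's repos, not confirmed by the thread; check the actual names with `pacman -Ss` first.

    ```shell
    # Assumed package names; verify with 'pacman -Ss <keyword>' before installing.
    pacman -S base-devel   # gcc, make, binutils, headers: the C/C++ toolchain group
    pacman -S mesa glu     # OpenGL implementation and utility library
    pacman -S sdl          # SDL, with its headers included (no separate -dev package)
    pacman -Si sdl         # inspect a package's dependencies before installing
    ```

    Note how, unlike Debian/Ubuntu, a single Arch package ships both the runtime library and its development headers, which is why no build-essential equivalent is needed beyond base-devel.
    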

  • Some questions..

    Hello all,
    I am new to ABAP and learning it now. I have some questions.
    1. What is the typical structure of an ABAP program?
    2. What are field symbols and field groups? Have you used "component idx of structure" clause with field groups?
    3. What should be the approach for writing a BDC program?
    4. What is a batch input session?
    5. What is the alternative to batch input session?
    6. What are the techniques involved in using SAP supplied programs? Do you prefer to write your own programs to load master data? Why?
    Best Regards,
    Purna

    Hi Purna,
    Phew! That's a lot of questions. Now let's see:
    4. A batch-input session is a way (but not THE ONLY WAY) to massively process SAP transactions. That means you store in such a session:
    - what transaction(s) you want to run
    - what screens you visit
    - what fields you fill in each screen
    - what pushbuttons/function codes/ok-codes (that is, what ACTION) you want to perform into each screen
    So all this information is stored in a so-called "batch-input session". A typical example is the batch-input session generated by the depreciation transaction (AFAB).
    3. Maybe the easiest way is to record a batch-input session (via SM35 > Record) and then write your ABAP code around it. You will find a form like this very useful:
    *&      Form  dynpro
    *       Appends CALL TRANSACTION (or BATCH-INPUT) data to an
    *       internal table
    *      -->_DYNBEGIN  'X' starts a new dynpro; ' ' same dynpro
    *      -->_NAME      If DYNBEGIN = 'X', program name; otherwise field name
    *      -->_VALUE     If DYNBEGIN = 'X', dynpro number; otherwise field value
    FORM dynpro USING _dynbegin
                      _name
                      _value.
      IF _dynbegin = c_yes.
        CLEAR bdc_data.
        bdc_data-dynbegin = c_yes.
        bdc_data-program  = _name.
        bdc_data-dynpro   = _value.
        APPEND bdc_data.
      ELSE.
        CLEAR bdc_data.
        bdc_data-fnam     = _name.
        bdc_data-fval     = _value.
        SHIFT bdc_data-fval LEFT
          DELETING LEADING space.
        APPEND bdc_data.
      ENDIF.
    ENDFORM.                    " dynpro
    ...and then you should call this form once per line generated by the recording of the batch-input session... We can all give you some more details on this if necessary (won't we, guys?? :-D)
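    A minimal, hypothetical sketch of that calling pattern (the program, screen, and field names below are placeholders, not taken from a real recording) might look like this:

    ```abap
    * Hypothetical sketch: fill the BDC table for one recorded screen,
    * then run the transaction. All names here are placeholders.
    PERFORM dynpro USING 'X' 'SAPLXXXX'   '0100'.     " start a new dynpro
    PERFORM dynpro USING ' ' 'BDC_CURSOR' 'MARA-MATNR'. " position the cursor
    PERFORM dynpro USING ' ' 'MARA-MATNR' 'MAT-001'.    " fill a screen field
    PERFORM dynpro USING ' ' 'BDC_OKCODE' '/00'.        " press Enter
    CALL TRANSACTION 'XXXX' USING bdc_data
                            MODE   'N'    " no screen display
                            UPDATE 'S'.   " synchronous update
    ```

    Each recorded line becomes one PERFORM, so the recording in SM35 maps one-to-one onto these calls.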
    5. Well, there are many other techniques. Do you mean for mass upload? For mass execution of transactions? Maybe CATT (Computer Aided Test Tool, transaction SCAT) could be used for this purpose as well. Or direct input, for example if you want to upload material master records. Or just straightforward INSERTs, but I seriously recommend you NOT to use those O:-)
    6. I personally try to use standard SAP ways to upload master data, so you can take advantage of all the SAP validations. But of course your legacy system needn't be SAP, so you'll have to develop your own ABAP code to perform some sort of "conversion" between the legacy format (usually a plain text file) and the format expected by SAP.
    Wow! I hope you reached this point. Please write back if you need some more help; or if this is enough, please let us know anyway. BR,
    Alvaro

  • Some questions on Sun Java System

    Dear all:
    I use Sun Java System Web Server 7.0 for my project.
    But I have some questions about SJSWS 7.0.
    My server runtime information
    1. OS is RedHat Enterprise 3
    2. memory: 2G
    3. SJSWS version: 7.0 preview 3
    My questions
    Q1. Downloading images is slower than running a JSP page, even though the images are smaller than 5 KB. Is it possible to speed up image downloads?
    Q2. My machine has 2 GB of memory, but the system is always using swap. I think it isn't enough for SJSWS 7.0. How much memory does SJSWS 7.0 need?
    Many thanks to everybody

    First off, please try the Update 1 release instead of the technology preview releases: http://www.sun.com/download/products.xml?id=467713d6
    Q1> So I want to ask: is it possible to display image and text at the same time, like when using Tomcat?
    There is nothing on the server end (whether it's Tomcat or any other web server) that affects how the content is displayed in the browser.
    Q2> I see that SJSWS 7 is using over 800 MB~1 GB of RAM. My website has 80~150 users during working hours.
    Are you seeing the Web Server process itself using that much memory?
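    As a generic way to check that, you can look at the process's resident set size rather than overall system memory. A sketch (the `webservd` process name is an assumption about SJSWS; `$$` below is just this shell's own PID as a stand-in):

    ```shell
    # Report resident (RSS) and virtual (VSZ) memory, in kilobytes, for a
    # given PID. Substitute $$ with the SJSWS worker PID, typically a
    # process named 'webservd' (assumed name; find it with pgrep).
    ps -o pid,rss,vsz,comm -p $$
    ```

    High VSZ with modest RSS usually just reflects mapped files and thread stacks; it is RSS that tells you whether the server is actually consuming physical memory.
    
    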
    Q3> Is it possible for the admin server to run SJSWS 7.0 U1 while some or all of the nodes run preview 3? The servers are running in production, so I can't stop them to update.
    No - you want to switch to U1 anyway, since it's a released version (and is absolutely FREE for production use).
