Need more configuration for reports?

Hi All,
When I start the report (Content Management -> Reports -> Approval Maintenance Report), the report runs, but when I go to the Running Reports related link, no reports are shown as currently running. Do I need any additional configuration for this?
I await your valuable reply.
Thanks and Regards,
Kalaivani

Hi Kalaivani,
You can refer to this link from http://help.sap.com (the SAP Help Portal provides web-based documentation for all SAP solutions; it enables you to search the online library for the right information where and when you need it):
http://help.sap.com/saphelp_nw70/helpdata/en/07/dad131443b314988eeece94506f861/frameset.htm
This link is from the KM Administration Guide. If you need any other configuration steps or documentation, just ask.
Kind Regards,
Lubi

Similar Messages

  • Configure for Report writer

    Hi
    I have a ZZZZ transparent table with data (it is not a pool or cluster table), and I want to be able to use Report Writer to read from this table and create RW reports.
    1. Is this possible?
    2. What are the setup tables needed to configure RW? I could only think of T804A and T804B.
    Does anyone have a soft copy of the steps for doing this? I am going through the SAP Help and it seems that there are steps missing, so I am unable to complete the process of creating a custom library for this table.
    Thank you
    Priya

    CO-PA is a mini data warehouse, and each company has to define the data source structure, called the profitability segment, based on its own profitability analysis requirements before creating any PA reports. Since the profitability segment can differ from company to company, no universal PA report can be used for all companies; that is also why SAP normally does not deliver standard reports in KE30.
    So you first have to define the characteristics and value fields, and the profitability segment built from them (KEA0). Then define forms and create your own PA reports based on those forms (KE34, KE31).
    Forms are the structures required as the basis for the definition of reports.
    Regards

  • I have my own icloud account separate from my family's apple ID for itunes. I need more storage for icloud...do I have to pay a separate $20/ month to get more storage or does the $20 include all members of the plan?


    Welcome to Apple Support Communities
    If your iCloud account uses your family's Apple ID, you will pay $20/year for all your family members. If your iCloud account uses a different Apple ID than your family's, you will pay only for yourself.

  • What is the best External Hard Drive for a Macbook Leopard 10.5.8?  I need more memory for documents, pictures, videos, and music.  Thanks!


    Hi, does your MacBook have Firewire, or just USB?
    FW is far faster if you have that.
    Avoid bus-powered ones.
    http://eshop.macsales.com/shop/firewire/1394/USB/EliteAL/eSATA_FW800_FW400_USB
    USB only...
    http://eshop.macsales.com/shop/firewire/usb/eliteclassic

  • Need more info for the ag34405 DMM VI's

    Hi !
    I'm working with the Agilent 34405A multimeter and made a little VI for data logging. The problem is, I want to choose between 4.5 and 5.5 digit resolution and there is no literature about it. I'm using the NI drivers and VIs made for this application. There is a VI named "Configure Measurement.vi" in which you could perhaps select the resolution (as the default value is 0.0001, that would be for 5.5 digits I think). I already tried to change the value from 0.0001 to 0.001, but the instrument still uses the 5.5 digit measurement method.
    Does somebody also know if there is more documentation than the context help of each VI made for the ag34405a DMM (there, more or less only the default values are explained, but I would need more...)?
    I've attached my VI if this could help to solve this issue. I am working with LabVIEW 8.20.
    Thanks in advance for your help!
    Yves
    Attachments:
    Start_stop_logging_ag34405a_V1.vi ‏47 KB

    Yves,
    One of the great things about instrument drivers is the context help on each of the controls. You can turn the context help on with Ctrl+H and then hover your mouse cursor over the control. If you open "Configure Measurement.vi" and look at the help for the absolute resolution control, you'll see a good description that includes:
    Notes:
    (1) The instrument driver ignores this parameter if the Range parameter is set to AG34405A_VAL_AUTO_RANGE_ON (-1) .
    If you look at the help for the Range parameter you'll see a similar description with details and the following:
    Defined Values:
    AG34405A_VAL_AUTO_RANGE_OFF (-2)   (-2.0)  - Auto-Range Off
    AG34405A_VAL_AUTO_RANGE_ON (-1)    (-1.0)  - Auto-Range On
    Based on this, can you try setting the range to AG34405A_VAL_AUTO_RANGE_OFF (-2) and specifying the absolute resolution?
    Regards,
    Kamran
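    For anyone who wants to sanity-check this behaviour outside of LabVIEW, below is a minimal Python sketch using PyVISA and raw SCPI rather than the NI driver. It illustrates the same point made in the reply above: the resolution argument only takes effect once a fixed range is specified instead of auto-range. The VISA resource string and the 10 V / 0.001 values are placeholders for illustration only; check the 34405A programmer's reference for the values your instrument accepts.

    import pyvisa

    # Sketch only: the resource address below is a placeholder, not a real instrument.
    rm = pyvisa.ResourceManager()
    dmm = rm.open_resource("USB0::0x0957::0x0618::MY00000000::INSTR")
    print(dmm.query("*IDN?"))           # confirm which instrument we are talking to

    # A resolution only takes effect together with a fixed range; with auto-range
    # the instrument ignores it (the same behaviour as the LabVIEW driver above).
    dmm.write("CONF:VOLT:DC 10,0.001")  # 10 V fixed range, coarser (faster) resolution
    print(dmm.query("READ?"))           # trigger one measurement and read it back

    dmm.close()
    rm.close()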

  • WVRS4400N wireless router - What do i need to configure for VPN software to work?

    Hi,
    My VPN software hasn't been able to establish a connection ever since I changed to the WVRS4400N router. What do I need to configure in order to establish a VPN connection to the outside network?
    On a side note, I am not sure if I am on the right track, but I read a little bit about setting IPSec Pass-Through. If this is the case, do I really need to get all the remote connection information in order for my VPN software to work? And under the Key Management setting, what would my pre-shared key be used for (assuming this is a key I generated)?
    Better yet, how can I get the VPN software to work with only access to the internet, without going through a lot of hassle?
    Thanks in advance for any solutions...

    Thank you for responding.
    If I am not mistaken, I believe what you are suggesting is setting up a VPN connection to my local area network and using the bundled Quick VPN client software to connect.
    I currently have Sentinel VPN software on my laptop, and it is configured to have access to my work's network. In most cases, I am able to connect to work without a problem as long as I have an internet connection. However, my VPN doesn't work when I am running it behind the WVRS4400N router. This problem didn't occur with my previous router. I believe the WVRS4400N router is blocking the connection by default. What can I do or configure to resolve this?

  • What all things I need to install for Report Services?

    I just want to use Oracle 9iAS Release 2 for reporting purposes. Along with RDF reports, I need to use the report service for JSP reports as well.
    So I just want to know what I need to install from the Oracle 9iAS CD.
    Any suggestions...

    2ManyDogs wrote: Read the Beginners' Guide.
    +1
    I've used Linux for several years, but only recently installed Arch for the first time. I installed Bridge Linux, Chakra, and later ArchBang, and after getting a feel for pacman (I think it's great!) and, with Bridge and ArchBang, the Arch repos, I was convinced that I wanted to go ahead with an Arch installation. Not sure if this is an option for you or not, but I actually tried two "test" installations on a spare computer first, taking detailed notes, with tabs in my browser (on another computer) opened to the Beginners' Guide and the "official" Installation Guide. I also kept the Bridge and ArchBang installations; kinda nice to be able to take a look at them sometimes to see how things are set up there. I don't know if Arch users would recommend this type of approach, but it has worked out well here.

  • Need help configuring for POF

    I am trying to use POF to serialize one specific named cache only. The client nodes are configured for near caches with no local storage. I ran into a problem where the error log complained that another node in the cluster was not configured for POF serialization on the DistributedCache service. So I created a new PofDistributedCache service for use by the POF cache. That changed my errors but didn't get me very far.
    Q1: If I have mixed POF / non-POF caches, do they need to use different DistributedCache services?
    Q2: Does the server (back-cache) also need a <serializer> block?
    Q3: Does the server need all the object classes and the classes needed to (de)serialize the objects?
    --Larkin
    Client side coherence-cache-config.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>pof-*</cache-name>
                   <scheme-name>default-near-pof</scheme-name>
                   <init-params>
                        <init-param-name>front-size-limit</init-param-name>
                         <init-param-value system-property="foo.coherence.default.front-size-limit">0</init-param-value>
                   </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>default-near</scheme-name>
                   <init-params>
                        <init-param-name>front-size-limit</init-param-name>
                        <init-param-value system-property="foo.coherence.default.front-size-limit">0</init-param-value>
                   </init-params>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <near-scheme>
                   <scheme-name>default-near</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <scheme-ref>default-distributed</scheme-ref>
                        </distributed-scheme>
                   </back-scheme>
              </near-scheme>
              <near-scheme>
                   <scheme-name>default-near-pof</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <scheme-ref>default-distributed-pof</scheme-ref>
                        </distributed-scheme>
                   </back-scheme>
              </near-scheme>
              <local-scheme>
                   <scheme-name>default-local</scheme-name>
                   <high-units>{front-size-limit 0}</high-units>
              </local-scheme>
              <!--
                   This config file is for client use only. The back-cache will not
                   provide any local storage to the cluster.
              -->
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <local-storage>${coherence.back-cache.storage}</local-storage>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>default-distributed-pof</scheme-name>
                   <service-name>PofDistributedCache</service-name>
                   <local-storage>${coherence.back-cache.storage}</local-storage>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Server side coherence-cache-config.xml:
     <?xml version="1.0" encoding="UTF-8"?>
     <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
     <cache-config>
          <caching-scheme-mapping>
               <cache-mapping>
                    <cache-name>pof-*</cache-name>
                    <scheme-name>default-distributed-pof</scheme-name>
               </cache-mapping>
               <cache-mapping>
                    <cache-name>*</cache-name>
                    <scheme-name>default-distributed</scheme-name>
               </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
               <distributed-scheme>
                    <scheme-name>default-distributed</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-ref>default-local</scheme-ref>
                         </local-scheme>
                    </backing-map-scheme>
                    <autostart>true</autostart>
               </distributed-scheme>
               <distributed-scheme>
                    <scheme-name>default-distributed-pof</scheme-name>
                    <service-name>PofDistributedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-ref>default-local</scheme-ref>
                         </local-scheme>
                    </backing-map-scheme>
                    <autostart>true</autostart>
               </distributed-scheme>
               <local-scheme>
                    <unit-calculator>BINARY</unit-calculator>
                    <scheme-name>default-local</scheme-name>
               </local-scheme>
          </caching-schemes>
     </cache-config>

    Hi Larkin,
    llowrey wrote:
    I am trying to use POF to serialize one specific named cache only. The client nodes are configured for near caches with no local storage. I ran into a problem where I got error log complaints that another node in the cluster was not configured for POF serialization for the DistributedCache service. So, I created a new service PofDistributedCache service for use by the pof cache. That changed my errors but didn't get me very far.
    Q1: If I have mixed POF / non-POF caches, do they need to use different DistributedCache services?
    Yes. You can control POF versus old-style serialization on a service-by-service basis only.
    Q2: Does the server (back cache) also need a <serializer> block?
    It is not relevant on the near cache itself. The scheme defining the back cache (and also invocation service and replicated cache schemes) needs to have the serializer specified.
    Q3: Does the server need all the object classes and the classes needed to (de)serialize the objects?
    If you want to deserialize the objects, then certainly it does. But with POF you don't necessarily need to deserialize entries from partitioned caches to define indexes or run entry processors/aggregations on them. You can leverage PofExtractors and PofNavigators to do all your server-side logic, although for complex data access it may be less efficient. You do need the key classes (on the NamedCache caller side) to be able to do operations on a partitioned cache, though.
    Best regards,
    Robert
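    To make the answer to Q2 concrete, here is a sketch (not a tested configuration) of what the server-side default-distributed-pof scheme could look like once the serializer is added. The pof-config.xml file name is an assumption; point it at the POF configuration file that registers your own user types.
     <distributed-scheme>
          <scheme-name>default-distributed-pof</scheme-name>
          <service-name>PofDistributedCache</service-name>
          <!-- Serializer on the back-cache scheme, as described in the reply above.
               The pof-config.xml name is an assumed placeholder. -->
          <serializer>
               <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
               <init-params>
                    <init-param>
                         <param-type>String</param-type>
                         <param-value>pof-config.xml</param-value>
                    </init-param>
               </init-params>
          </serializer>
          <backing-map-scheme>
               <local-scheme>
                    <scheme-ref>default-local</scheme-ref>
               </local-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
     </distributed-scheme>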

  • Need hardware configuration for server

    Hi All,
    I need a server hardware configuration for deploying and maintaining an SSAS cube. Consider the OLTP size to be 100 GB.
    Thanks,
    ATRSAMS

    Hi Atrsams,
    According to your description, you need some hardware recommendations for deploying and maintaining a SQL Server Analysis Services cube, right?
    Though this isn't a sizing recommendation, I'd encourage you to buy as much memory as you can afford. Memory is generally the choke point I've seen on servers, especially when using SSAS. For detailed information, please refer to the links below.
    http://sqlblog.com/blogs/marco_russo/archive/2013/02/12/hardware-sizing-guide-for-ssas-tabular.aspx
    http://www.experts-exchange.com/Hardware/Microsoft_Hardware/Q_27781248.html
    Regards,
    Charlie Liao
    TechNet Community Support

  • Email configuration for report

    Hi,
    I have already completed the email configuration using SCOT, and I am able to send a mail to an end user successfully. The functional consultant generates a report every day at 8:00 pm using a background job. This report should automatically be sent to one end user through email. Is it possible to configure this in SAP? If it is possible, how do I configure it?
    Please advise me.
    Regards,
    Kumar.

    Yes, it's possible.
    You need to configure SCOT to send mails to external addresses. One common approach is to also specify a spool list recipient when scheduling the background job (SM36), so that the report's spool output is mailed to that user automatically.
    Read:
    http://help.sap.com/saphelp_nw04s/helpdata/EN/af/73563c1e734f0fe10000000a114084/content.htm
    Regards
    Juan

  • Need more information for ADT

    Hi Guys,
    I need more information about ADT. I am new to ADT, so will you please provide the docs related to ADT?
    Thanks in advance.
    Best Regards,
    Purna.

    It is a good mobo; apart from an unscheduled problem on my side (bad SATA cables), all is fine!
    Recommended.

  • Do i need more memory for mountain lion?

    I'm trying to update to Mountain Lion but get a pop-up box saying that I need more memory. Is this something I need to buy and install?

    Mountain Lion requires a minimum of 2 GB of RAM; however, most users recommend 4 GB or more. Installing RAM is extremely simple, and yes, you need to buy it. We would need to know the exact model of iMac you own in order to advise which RAM your system needs. I'd recommend contacting a trusted vendor such as OWC (www.macsales.com), Crucial, or your local AASP; each can assist you.

  • HT4623 need more storage for update

    Best things to delete for more storage for update?

    Any movies or TV shows transferred from your iTunes library take up quite a bit of storage space and can be re-transferred after the update has been installed. If your iPhone's Camera Roll includes a lot of photos/videos, import those with your computer followed by deleting from the Camera Roll after the import process is complete. The imported photos/videos can be transferred back to your iPhone via the iTunes sync process after the update has been installed.
    Delete apps rarely if ever used which can be reinstalled after the update has been installed.

  • Need more storage for music/photo/dv film

    I own a MacBook Pro, an AirPort Time Capsule, and PC laptops with Windows XP and 7/8. We all share a printer using the Time Capsule, but I need extra storage for my pictures and our music (all stored on my Mac). I would love to move our CD and DVD collections too, but that is not the main issue. I need space to save all my pictures and video/films. I have read that you can use part of the Time Capsule as an external hard disk, but that it is not recommended. I do use backup elsewhere as well (JottaCloud), but I would like to keep a copy stored in-house for easy access when I have time to edit pictures, make iMovies, etc. So what kind of external hard disk should I get? Please advise.

    Travis,
    Two things. Apple lossless files are huge in comparison to compressed files such as mp3 or aac. You should convert those lossless files to either mp3 or aac at a bit rate of (say) 160kbps. This will reduce the amount of space used on your computer. Once you've done this, delete the lossless files.
    VBR mp3 isn't a hard drive, it's a compression format, and VBR means variable bit rate.
    See this.
    Import and Convert functions: http://docs.info.apple.com/article.html?artnum=301509
    And this.
    How to convert a song to a different file format: http://docs.info.apple.com/article.html?artnum=93123

  • Changing JDBC Datasource Configuration for Report with Sub reports at once

    The environment details are as follows:
    CR Developer
    Version 14.0.2.364 RTM
    We are using a JDBC connection data source for our CR2011 report, which contains 30+ sub-reports. Each of the sub-reports uses a JDBC data source to connect to a Postgres database, and since the JDBC connection string changes in each environment, we need to edit each sub-report every time we switch environments.
    Is there an easy way to accomplish this for all sub-reports in one shot?
    I am aware that if we used an ODBC connection it would be easy, since we could just change the DSN configuration and it would start working. But we are not using an ODBC connection because we are seeing that our report (with so many sub-reports) crashes when we use the ODBC driver for Postgres.
    Any help/suggestion would be appreciated.

    Hello,
    CR also has a fully supported Java Reporting Engine. If you have Java developers available, check out this forum:
    SAP Crystal Reports, version for Eclipse
    You can find more info and samples here:
    http://wiki.sdn.sap.com/wiki/display/BOBJ/BusinessIntelligence%28BusinessObjects%29+Home
    And help.sap.com for the SDK reference material.
    Don
