Need input on data cache

Hi All,
Here is the situation:
Data cache = 3 MB (default)
Data block size = 20 KB
So the number of blocks the data cache can hold is about 157 (3 MB = 3,145,728 bytes; 3,145,728 / 20,000 ≈ 157).
What happens if a user retrieves 300 blocks in one retrieval?
I know swapping takes place across multiple retrievals.
I know that the aggregate cache in ASO grows until the OS refuses further allocations.
Will Essbase try to increase the data cache? If so, how does that work?
My understanding is that if Essbase tried to grow the data cache, it could touch the wrong segment in memory and cause a segmentation violation, crashing the server.
Given how modern operating systems manage virtual memory with pages and segments, I think this is what would happen.
Please give your inputs!
Thanks,
Jeeth

Hi,
I just did an initial analysis.
The data cache is still 3 MB (it was never increased).
The XCP log says it stopped while executing the MaxL statement (MaxL pending),
and it is a segmentation violation.
From my understanding, a segmentation violation has to do with kernel memory allocation, virtual memory, and physical memory.
I cannot try the same on the prod server (please don't ask why the settings are at the defaults; I'm new to the system).
In my initial analysis, the most visible suspect to me at this point is the data cache with buffered I/O.
A segmentation violation can also happen when the total available RAM is exhausted,
but in our case we are using only a third of the total available RAM
(memory across all applications = 1/3 of available RAM),
and it is a MaxL operation.
Seeing all this, I suspected the application cache (the data cache).
I posted only to gather a little more info and to have a discussion to get a clear understanding.
Thanks for your support and input.
Please advise.
Thanks,
Jeeth

Similar Messages

  • Need to set the default data cache to 1 GB

    Hi All,
    When I tried to change the default data cache, I got the prompt below.
    1> sp_cacheconfig "default data cache", "1024M"
    2> go
    Msg 10879, Level 16, State 1:
    Server 'TST', Procedure 'sp_cacheconfig', Line 1752:
    The current 'max memory' value '1792000', is not sufficient to change the size
    of cache 'default data cache' to '1024M' (1048576 KB). 'max memory' should be
    greater than 'total logical memory' '2024571' required for the configuration.
    (return status = 1)
    How do I set the default data cache to 1 GB?
    Please suggest.

    Hello Karthik,
    Increase your 'max memory' and run the cache config command again; it should work.
    E.g.:
    1. Increase 'max memory' by an additional X MB.
    2. Increase 'statement cache size' by an additional X MB.
    3. Both are dynamic parameters, so no ASE restart is required.
    Regards
    Kiran K Adharapuram
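    As a minimal sketch of the fix (the target value below is an assumption with some headroom; per the error above, 'max memory' is given in 2K units and must exceed the required 'total logical memory' of 2024571):
    1> sp_configure "max memory", 2200000
    2> go
    1> sp_cacheconfig "default data cache", "1024M"
    2> go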

  • Web Application - Data caching of enterprise data

    Sorry in advance if this is off-topic but I can't find anywhere else to post this type of question.
    I am looking for information/suggestions such as books, technology, or design methodology for my enterprise web applications. These sites are currently up and functional using only JSP, servlets, and regular Java classes stored in the web application session to provide data caching and access. We are using WebLogic Server 6.1 running on an AIX Unix system at this time. I'm not sure this is the best design architecture, as our web sessions are getting too large, but I can't think of any other Java technology to use and I need some help. Here's an overview of our environment and our needs.
    Our core data is held in a mainframe based IMS system. Some DB2 is also used. Access to this data is through IMS COBOL transactions which we can execute with IMS Connect. We also use some JDBC to get to the DB2 tables directly where available.
    Some overall application data is cached when the web application is deployed. We use singleton classes which are created and refreshed at deployment and they then refresh themselves from the sources every 24 hours.
    Each time a user logs in we execute several IMS transactions and JDBC calls to cache user-specific data in regular Java classes, which are then simply placed in the user's web session where we access them from JSP, servlets, and other Java classes. The fields in these Java classes range from primitive fields to TreeMaps of other Java classes. As the data is cached it is sorted, and other fields are calculated and stored in these classes. As the user progresses through the system we may then have to make several other IMS transaction and JDBC calls to collect other types of data. All of this is then also added to the user's session. Most of this is inquiry. We do allow transactions, but those are built from user input and data already cached; we then just execute the IMS transactions with that input.
    As our application has grown, these Java classes have gotten larger and larger, and since they are simply stored in server memory in the web sessions, the sessions are also getting huge. I'm concerned that this is not the best way for this application to be architected. Is there something else we should be doing? I simply don't understand how Entity Java Beans could be used, but then again I don't know much about them. I wouldn't think that caching the data to a local database and accessing it from there would be any more efficient; it would probably just slow the system down with all the I/O.
    Any help or direction would be greatly appreciated.

    The best book you can buy is 'Professional Java Server Programming, j2ee edition' by Wrox. It is by far the best reference I've used. Another quick reference consideration might be the j2ee book provided by codenotes... its quick and to the point.

  • Need inputs regarding the dvd drive on primary ide channel.

    Specs:
    Motherboard: MSI P35 Neo3 (MS-7935 1.0)
    CPU: Intel Core2 Duo E6550
    Memory: Team Elite DDR2800 (2x1GB Dual Channel)
    Hard drive: Seagate Barracuda 7200.11 500GB 32MB Cache (SATA)
    Optical drive: LG GSA-H55L (IDE only) Firmware version 1.02
    Graphics card: Gecube HD3870
    Chipset: Intel P35/G33/G31 (Rev. A2)
    Southbridge: Intel 82801IB (ICH9)
    LPCIO: Fintek F71882F
    BIOS: AMI V1.1 (11/07/2007)
    Hello,
    Almost all new motherboards today only have a primary ide channel and the rest are SATA.
    I need inputs regarding the DVD drive, which is shown in Device Manager as located on the primary IDE channel while the hard drive is located on the secondary channel. I would like to update the DVD drive to the latest firmware (version 1.06, to be able to recognize more blank media), but the LG site recommends that the DVD drive be located on the secondary IDE channel.
    I have already tried uninstalling every channel from Device Manager, but everything is still the same after reboot: DVD drive on primary, hard drive on secondary.
    The current IDE mode in the BIOS is AHCI+IDE, the DMA modes are fine (UDMA4 for the DVD, UDMA5 for the hard drive), and the boot sequence is 1st=HD, 2nd=DVD, 3rd=Floppy drive.
    I tried switching to IDE-only mode in the BIOS, but then the optical drive was not detected. I never tried RAID+IDE mode since I only have a single hard drive.
    I'm not sure, but if I try to update the firmware, it might update the hard drive's firmware instead of the optical drive's and make the hard drive unusable.
    I would like to know if anybody in this same situation was able to successfully update the optical drive's firmware, or whether there is some way to place the optical drive on the secondary IDE channel and the hard drive on the primary, so I can follow and replicate the process.
    Thank you for any replies.

    Thanks sir NovJoe for the reply.
    Just received the solution from another forum where I posted the same problem.
    They said that the flash program for the firmware will detect the drive on its own and will not flash any device other than the optical drive itself. So it's pretty safe, and I can confirm this since I flashed my ODD a little while ago and everything went fine. No errors.
    Though I'm speaking for the brand of ODD I own; it may be different for other brands, so take precautions as well.

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some Googling and found that we need to add something to the essbase.cfg file, per the documentation below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified by the CALCLOCKBLOCKHIGH setting:
    Determine the block size.
    Set the data cache size accordingly (for example, if CALCLOCKBLOCKHIGH is 500 and the block size is 20 KB, the data cache needs at least 500 x 20 KB ≈ 10 MB for the locked blocks alone).
    Actually, our server config file (essbase.cfg) does not have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't, it is worth investigating whether this is simply standard growth or a recent change that has had an unexpectedly significant impact.

  • Need Inputs - Creation of webservice in SAP R3 through PI

    Hi Experts,
    My client wants me to create a web service in SAP R/3 through PI, and they (the client) will call it from their 3rd-party software.
    The web service contains fields like Company_Code, Location_Code, Item_Code, etc.
    SAP --> PI --> 3rd party
    Is this possible through PI? Please comment!
    Thanks.

    Hi,
    If you have such a requirement, it is easy to create a WSDL (web service) on the PI system.
    Create a normal XI interface with the normal steps.
    Your sender data type will be according to your client's requirement.
    Go to Tools > Define Web Service (it will use the outbound service interface and namespace),
    read the template, and put your values in there. This will generate a WSDL for you.
    You can give this generated WSDL file to the client to consume in their application and pass data to it.
    Hope it helps.
    Regards
    Prabhat Sharma.

  • Need to POST data from a desktop client to a server.

    Hello all, it's been a while since I've posted here, so I hope everyone has been doing well.
    I have cross-posted this on Stack Overflow ("java - Need to POST data from a client application to my server") with no answers, and since SO has been extremely slow for the questions I've been asking, I am posting here.
    Here is the post:
    I know the title is probably a common question, but I am a bit confused on everything I'm trying to do, so I am trying to piece it together, and figured a common title would be better than a confusing one.
    I am basically developing a web application, and one part of it is a file uploader. I am using Apache Commons FileUpload via the Streaming API, and that all works fine, except that I need to access the file I'm uploading, because it contains data pointing to additional files to upload. I.e., read File A, get paths to images, upload the images along with File A to the server, and save them on the server. The API can be found here: http://commons.apache.org/proper/commons-fileupload/streaming.html
    I was told there is a security risk via the web and this would be impossible via a browser: since the user needs to select all files to upload, I cannot tell the browser to upload additional files, so I am left with a client-side option.
    I am confused about whether there is a special library I need, as I have been seeing threads that talk about using the built-in URLConnection class or http://hc.apache.org/
    I basically need to be able to read the file, which gives me a path to a database on the user's system, which I then read to get the additional images. After I get all of that, I need to POST the data as a multipart form, as that is what FileUpload requires.
    <form method="POST" enctype="multipart/form-data" action="fup.cgi">
      File to upload: <input type="file" name="upfile"><br/>
      Notes about the file: <input type="text" name="note"><br/>
      <br/>
      <input type="submit" value="Press"> to upload the file!
    </form>
    This is the example found in the Overview section of the Fileupload which can be accessed from the link above.
    There wouldn't be an issue if the users uploaded all of the data themselves, but since I have to do some of it automatically, it causes some "concerns/issues."
    Basically, these files are created and packaged by another application, so the images and the db will always be in the same place, and the file being uploaded is a file the other program creates, so everything will always be known. I just need to upload it and then POST it as enctype="multipart/form-data" so that my servlet can read it and save it on my server.
    So I would appreciate any suggestions on where to begin my journey with this. I have heard of a few applications like curl and wget that are used for this, but those seem to be more C-based. As mentioned earlier, it seems the HttpComponents from Apache might work well, but I want to make sure.
    I appreciate all the help, thank you for your time all.

    It's not possible to read from a file without using classes from the core API*. You'll have to get clarification from your instructor as to which classes are and are not allowed.
    [http://java.sun.com/docs/books/tutorial/essential/io/]
    *Unless you write a bunch of JNI code to replicate what the java.io classes are doing.

  • Lightroom 5 keeps returning to the registration/licence window every time it launches

    Hi, I have just loaded Lightroom 5 from the disc onto my Mac. Every time I try to launch the application, it goes to the registration/licence window. I have input the data five times already. When Lr5 launches, the update window appears, and when I try to do anything the following message comes up: "An error occurred when attempting to change modules." What do I need to do to fix this?

    Masher, please use the uninstaller located in the Applications/Utilities/Adobe Installers folder. Once Photoshop Lightroom is removed, please download Lightroom 5.5 from Adobe - Lightroom : For Macintosh : Adobe Photoshop Lightroom 5.5 and reinstall.

  • How to check if table or index bound to data cache loaded in memory?

    In my system, there are many named data caches created, and certain objects are bound to those data caches.
    For example, I have a large table mytab which is bound to data cache mycache.
    When I issue a SQL statement like select * from mytab where x=y, running it the first time is very slow; run it again and again, and it is very fast. This means the data is loaded in cache and ready for the same query.
    Questions:
    1. How can I know whether the table mytab has been loaded into data cache mycache?
    2. How can I load mytab into data cache mycache once, for all queries in the future?

    One way to monitor this is:
    select CacheName,DBName,OwnerName,ObjectName,IndexID,sum(CachedKB) as "CachedKb"
    from master..monCachedObject
    group by CacheName,DBName,OwnerName,ObjectName,IndexID
    order by CacheName,DBName,OwnerName,ObjectName,IndexID
    But you should understand that caches have a finite size, and caches contain "pages" of objects, including data pages, index pages, and LOB pages. Also, caches may have different pool sizes, and a page can be in only one cache pool. So, if you want a table and all of its indexes and text/image pages to be loaded into a dedicated cache, you need a cache large enough to fit all of those pages, and you must decide which buffer pool you want them in (typically either the 1-page pool or the 8-page pool).
    Then, simply execute SQL (or dbcc) commands that access all of those pages in the manner you wish to find them in the cache.  For example, two statements, one that scans the table using 2k reads, and another that scans the index (mytab_ind1) using 2k reads.
    select count(*) from mytab plan '( i_scan mytab_cl mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    select count(*) from mytab plan '( i_scan mytab_ind1 mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    etc etc.
    (I used count(*) to limit the result sets of the examples.)
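    As a hedged sanity check for question 1 (assuming mytab is in the current database), you can compare the CachedKb total from the query above against the table's actual size:
    sp_spaceused mytab
    go
    If the cached total is close to the data plus index sizes that sp_spaceused reports, the table is effectively fully cached.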

  • Question about MAXCPU and clearing data cache

    Hi all,
    We're running MaxDB 7.6.03.15 and have the following questions. I apologize for not grouping them together with my earlier set of questions along similar lines.
    1. What would be an appropriate value of MAXCPU for our application when running on a quad-processor machine? The application consists of MaxDB along with 4 Windows services and an instance of Tomcat. Can we use 4 for MAXCPU, or does it always need to be less than the total number of CPUs on the machine (less than 4 here)? Please advise.
    2. When we're doing query execution time analysis, we need to be able to clear the data cache between query runs. So far we achieve this by simply bringing the database offline and then online again. The problem with this approach is that it is time-consuming: if we do 100 query runs, a lot of time is spent waiting for the database to go from online to offline to online again. Please let us know if there is a faster way to achieve the same.
    Thanks for your time.
    Sincerely,
    Sameer

    > 1. What would be an appropriate value of MAXCPU for our application when running on a quad-processor machine? Can we use 4 for MAXCPU, or does it always need to be less than the total number of CPUs on the machine?
    Of course you may set MAXCPU to 4 - but you then risk the four user-task UKTs using up all the CPU resources on your system.
    So it is usually better to be rather defensive with this. If you're unsure about the CPU needs of your other services, it is better to try things out, starting with MAXCPU 2.
    > 2. When we're doing query execution time analysis, we need to be able to clear the data cache between query runs. Please let us know if there is a faster way than bouncing the database.
    Well, x_cons pagecache_release should do that for you.
    But I was never able to really see the effect of it (e.g., the pages being read in again).
    If I'm not mistaken on this point - which is quite possible, because the command is rarely used and sparsely documented - it just moves the pages to the lower end of the LRU list, thereby making them candidates for being moved out of the cache, without actually doing it.
    However why don't you just use the resource monitor to figure out how many pages are touched by your statements?
    Then you don't have to bounce your instance.
    regards,
    Lars

  • How to update the Input message data in OSM

    Hi,
    I am using OSM 7.0.2.
    Can we update input message data before orchestration execution?
    I want to add some more order line items to the input XML by calling an external web service.
    Suppose CRM submits an order with 3 order line items. Once we have that in the input message, I want to add some more order line items to it.
    I don't want to add a processing/adapter layer above OSM to do the updating of the XML.
    Thanks in advance.
    Rutvej

    Hi Rutvej,
    You use an Order Data Rule to generate data for the creation view of the orchestration order. The source schema of the Order Data Rule is the schema for the incoming message. The output returns the <_root> portion of the creation view. Note that the Order Template already has the Sales Order (look for "order <XML>"), and it is populated automatically by OSM Core. So what you are trying to do is not to add to or change the Sales Order itself, but first to add the desired enriching data model to the Order Template, and then use an Order Data Rule to populate that enriching data into the Order Template.
    Example of Order Data Rule:
    declare namespace cso="http://xmlns.oracle.com/communications/sce/dictionary/CentralOMManagedServices-Orchestration/CustomerSalesOrder";
    let $customer := //cso:CustomerAccount
    return
    <_root>
         <OrderHeader>
              <AccountIdentifier>{$customer/cso:AccountID/text()}</AccountIdentifier>
         </OrderHeader>
         <EnrichedOrderItem>
              <Data1>Your enriched data here</Data1>
              <Data2>Your enriched data here</Data2>
         </EnrichedOrderItem>
    </_root>
    Before that, you would need to add the OrderHeader and EnrichedOrderItem into the Order Template.

  • Japanese characters are not passed correctly (they arrive as ??? or unreadable characters) when the input variable is an XML data type; the same solution works with the document data type

    Dear Team,
    Japanese characters alone are not passed correctly (they arrive as ??? or other unreadable characters) to the Adobe application when we create the input variable with the XML data type. The same solution works fine if we change the input variable's data type to document. Could you please do the needful? Thank you.

    Hello,
    The most recent patches for IGS and the kernel were installed. Now it works.

  • Sysgen : How to read the input port data type, width and rate dynamically in a masked subsystem ?

    Hello everybody,
         I am designing a general-purpose block in System Generator. I pass the user parameters to the block by masking it. Some user parameters can change the block configuration. The input port data type, width, and rate can also affect the block configuration.
         The problem is that these values (input port data type, width, and rate) are subject to change, so I should read them dynamically and then change the block configuration by programming the "Initialization Commands" field. But unfortunately there is no straightforward way to read the input port information.
         There are some methods for this in, for example, the "Black Box" block; these are:
    input_width = this_block.port('din').width;
    input_rate = this_block.port('din').rate;
    But these methods are not applicable to a masked subsystem.
    I have tried other ways as well; you can find them below. None of them worked.
    Does anybody know how can I solve this problem?
    Other ways I tried:
    1)
    design_name([],[],[],'compile')                                       
    q=get_param(gcb,'PortHandles');
    get_param(q.Inport,'CompiledPortDataType')
    get_param(q.Inport,'CompiledPortWidth')
    get_param(q.Inport,'CompiledPortDimensions')
    design_name([],[],[],'term')
    2)
    ssGetInputPortDataType
    3)
    ts = Simulink.Block.getSampleTimes([gcb '/Input'])
     

    Today we rely on Simulink to perform parameterization of your designs in two ways:
    1. Parameterizable subsystems and blocks: parameters themselves can be MATLAB expressions that need to be evaluated, for which we need the MATLAB interpreter.
    2. The very useful rate and type propagation of Simulink compilation, which allows us to specify types and rates in one location and have them systematically propagated to all.
    To truly make the HDL netlist that is generated from SysGen parameterizable, we would have to implement some of this capability in the HDL netlist itself by:
    1. Using generics (VHDL) or parameters (Verilog): we would have to capture the bit width (type) propagation through levels of hierarchy and finally parameterize the IP itself based on this value. Since the IP itself does not have this capability through generics, we would have to package a separate tcl script that updates the IP parameterization appropriately in response to top-level parameters (or GUI parameters).
    2. Interpreting MATLAB expressions and translating them into VHDL/Verilog expressions (alternatively, tcl expressions for IP). In Simulink, mask parameters can be passed from one level to the next. Also, the parameterization of a block can be composed of MATLAB expressions using variables from ancestor masks and the MATLAB interpreter, so we would need to somehow capture that as well.
     

  • Why doesn't the Excel Insert Cells block have an input for data?

    Hi
    I found a block for inserting a new row in Excel, but it has no input for data. How should I insert data into an Excel file with this block? Can you help me?
    The block name is Excel Insert Cells,
    in the Report Generation Toolkit.
    Best Regards

    behzad1 wrote:
    I could work with the Excel Insert Cells block, but when I want to add new data as a continuation of an old row, the last row shifts downward, while I want to add the data to the old row. Can anyone help me?
    Nobody will be able to identify the problem without seeing your code. Excel Insert Cells.vi is used to add cells to an existing spreadsheet, not to set cell values. To do this in a more specific way than Append Report Text.vi, you can use Excel Easy Text.vi or Excel Insert Table.vi. With these VIs you can specify the range where you want to insert something.
    For your other question (two different data types), you can use Excel Set Cell Format.vi to format a range as a date or something else. You will need to use the Excel format specifiers for this.
    Ben64

  • How to configure params for buffer pool for named data cache?

    When you create a named data cache on ASE 12.5, it sets up a 2K I/O buffer pool by default, with Configured size = 0 and wash size = 60M.
    1. Can the 2K pool be changed to 8K for this buffer pool?
    2. If I add another 16K buffer pool, should the Affected Pool be changed to the right pool?
    3. How do I decide the page size, configured size, and wash size for a buffer pool? Are they part of the total memory size allocated for this cache?

    > 1. Can the 2K pool be changed to 8K for this buffer pool?
    You should be able to create an 8K I/O pool, then drop the 2K pool by setting its size to 0.
    > 2. If I add another 16K buffer pool, should the Affected Pool be changed to the right pool?
    If you don't specify the affected pool (when calling sp_poolconfig), the procedure takes the space from the pool with the smallest I/O size. So if you had added an 8K pool and dropped the 2K pool, the space for the new 16K pool would come from the 8K pool.
    > 3. How do I decide the page size, configured size, and wash size for a buffer pool? Are they part of the total memory size allocated for this cache?
    The wash is included in the pool. I don't think it usually needs to be adjusted.
    Which page size pools to have will depend on how the cache is used.  Tables with a clustered index that have a lot of range queries will benefit from larger page size pools, as will text/image/java.   Syslogs is said to do well on a 4k pool.
    -bret
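    A minimal sketch of points 1 and 2 using sp_poolconfig (the cache name 'mycache' and the pool sizes are assumptions for illustration):
    -- create an 8K I/O pool of 100M in the named cache (assumed name)
    sp_poolconfig "mycache", "100M", "8K"
    go
    -- add a 16K pool of 50M, naming the 8K pool explicitly as the affected pool
    sp_poolconfig "mycache", "50M", "16K", "8K"
    go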
