Max limit of job servers with BODS

Under Tools > Options > Job Server > Environment there is a setting:
Maximum number of engine processes = 8
What does that mean? Does it mean this BODI installation can run 8 job servers in parallel?
Is there a max limit on the number of job servers?
Thank you very much for the helpful info.

Michael, good post .... follow-up question on this:
Per your reply, if I have a job with 2 dataflows that run in parallel, I should be able to see 3 processes being launched.
And on my Linux server, using "ps -f -u myuser", I would see processes like:
/app/bods/sapbo/dataservices/bin/al_engine -PLocaleUTF8<snip>
/app/bods/sapbo/dataservices/bin/al_engine -C/app/bods/sapbo/dataservices/log/command<snip>
/app/bods/sapbo/dataservices/bin/al_engine -C/app/bods/sapbo/dataservices/log/command<snip>
Correct?
Mike
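A quick way to sanity-check this, sketched against the sample ps output above (the paths come from this post and may differ on other installs):

```shell
# Sample 'ps -f -u myuser' output captured as a variable (paths as in the post above)
ps_output='/app/bods/sapbo/dataservices/bin/al_engine -PLocaleUTF8
/app/bods/sapbo/dataservices/bin/al_engine -C/app/bods/sapbo/dataservices/log/command
/app/bods/sapbo/dataservices/bin/al_engine -C/app/bods/sapbo/dataservices/log/command'

# One job-level al_engine plus one per parallel dataflow -> 3 processes expected here
count=$(printf '%s\n' "$ps_output" | grep -c 'al_engine')
echo "al_engine processes: $count"
```

On a live system you would pipe ps directly, e.g. `ps -f -u myuser | grep -c '[a]l_engine'` (the bracket trick keeps the grep process itself out of the count).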

Similar Messages

  • DI Job servers were terminated randomly on HPUX

    Hi there,
    I have a strange problem with DI 11.5.3 on an HP-UX PA-RISC server. There are 21 job servers, each with 4 or 5 repositories attached. Every couple of days, 1 or 2 job servers are mysteriously terminated. The servers that crash seem to be random, i.e. it could be Job server 1 yesterday and Job server 21 today. And it could be a job server that is running a batch job, or one not running any job at all. Sometimes there is a core dump, but sometimes there is none.
    The HP box is an extremely powerful server with plenty of CPUs and memory. After monitoring the system closely with Glance, I found that job servers tend to crash when disk utilization stays at 100% for a long time, which may be caused by some long-running database scripts. It doesn't happen immediately, but more job servers start to fail the longer this goes on. For example, one job server may fail 30 minutes after it starts, and another after 2 hours. It's unpredictable when and which job servers will crash. My question is: why is the Job Server process killed, and not other application processes on the server? Is it killed by the OS or by the DI background service? If the OS, is it because they run at a lower priority, or something else?
    Thanks,
    Larry

    It's strange that there is no service_eventlog.txt file in $LINK_DIR/log. There is an "error_JobService.log", but no error message was logged there for the crash that occurred this morning (yes, it happened again). I did find the following lines in the "server_eventlog.txt" under the directory of the job server that crashes.
    (11.) 02-20-09 07:12:26 (2729:0001) JobServer:  BODI-850003:AddChangeInterest: job change interests added.
    (11.) 02-20-09 07:12:27 (2729:0007) JobServer:  BODI-850170:Sending notification to <inet:192.168.40.37:5002> with message type <16>
    (11.) 02-20-09 07:12:27 (2729:0007) JobServer:  BODI-850170:Sending notification to <inet:192.168.143.50:5002> with message type <16>
    (11.) 02-20-09 07:13:27 (2729:0007) JobServer:  BODI-850216:Send notification thread fails sending notification. Could not send notification on time..
    (11.) 02-20-09 07:13:27 (2729:0007) JobServer:  BODI-850170:Sending notification to <inet:192.168.13.114:5002> with message type <16>
    (11.) 02-20-09 07:13:27 (2729:0007) JobServer:  BODI-850170:Sending notification to <inet:192.168.40.37:5002> with message type <16>
    (11.) 02-20-09 07:13:28 (2729:0007) JobServer:  BODI-850170:Sending notification to <inet:192.168.143.50:5002> with message type <16>
    (11.) 02-20-09 07:14:28 (2729:0007) JobServer:  BODI-850216:Send notification thread fails sending notification. Could not send notification on time..
    [... the same pattern, a few BODI-850170 "Sending notification" lines followed by a BODI-850216 "Send notification thread fails sending notification" line, repeats roughly every minute through 07:18:31 ...]

  • Steps max limit in a Job Chain

    Hello experts,
    SAP recommends a minimum number of steps in a job chain for performance reasons.
    Can anybody tell me: is there a max limit on the number of steps in a job chain?
    Thanks ,
    Suresh Bavisetti

    Can you reference the documentation where you see a minimum number of steps in a job chain recommended? I don't think there is an upper limit on the number of steps in a chain. How many steps are you thinking of adding?
    Or are you talking about nesting job chains within other job chains? Please clarify if this is the case.
    Rgds,
    David Glynn

  • SAP BW structure/table name change issue with BODS Jobs promotion

    Dear All, one of my clients has an issue with the promotion of BODS jobs from one environment to another. They move SAP BW projects/tables along with BODS jobs (separately) from DEV to QA to Prod.
    In SAP BW the structures and tables get a different postfix when they are transported to the next environment. The promotion in SAP BW (transport) is an automated process, and it is the BW deployment mechanism that causes the postfixes to change. As a result, with the transport from Dev to QA in SAP BW we end up with table name changes (be they only suffixes), which means that when we deploy our Data Services jobs we immediately have to change them for the target environments.
    Please let me know if someone has deployed some solution for this.
    Thanks

    This is an issue with the SAP BW promotion process. The SAP Basis team should not turn on the setting that suffixes the system ID to table names during promotion.
    Thanks,

  • GTX 970 4g OV max limit...any problem with that??

    Hi, my 970's stock voltage is 1,212v and the ASIC is 65,4% (quite low), vbios 84.04.2f.00.f1. In Afterburner, despite there being a slider from 0 to +87mv, there seem to be only 3 steps of voltage increase; for me they are:
    +0=1,212v
    +0 to +25mv=1,237v
    +25 to +87mv=1,256v
    - Is it SAFE to run the card at 1,256v 24/7 (obviously only when gaming)?
    - Why is the slider not applying the voltage in the increments it indicates? I mean, +37mv should equal a +37mv voltage increase, right?
    - And the most important thing: ONLY when the voltage reaches 1,256v do I see in the graphs that OV MAX LIMIT = 1 all the time (or most of the time), so I wonder if it's SAFE to run the card with OV max limit = 1. Judging by the 1,256v, I think there are lots of people running the card this way, but I'm not sure if they have OV max limit = 1.
    Finally, I can run the card at 1486MHz core (+170) and 3507MHz memory (+0) at +25mv (1,237v) without the OV max limit at 1. Should I just leave the card there?
    Many, many thanks guys!!

    Quote from: watermanpc85 on 28-April-15, 19:22:29
    Thanks for the reply... I don't know how to upload the actual bios file, but I recently updated the bios to the .186 version (latest)... now it says 84.04.36.00.f1.
    The vbios version is NV316MH.186; the numbers from GPU-Z (84.04.36.00.f1) are irrelevant and mean nothing.
    Quote from: watermanpc85 on 28-April-15, 19:22:29
    So I assume there is no problem running the card with OV max limit = 1, right? (as soon as the voltage is 1,256v)... Anyway, right now it makes no sense for me, because as soon as I increase the voltage to 1,256v the card reaches the max TDP limit and then it throttles... I would need to increase the TDP limit then... I have researched on the net but I can't find a CLEAR solution on how to increase the small 10% default... is there any "easy" way to do it?
    It just means that you have reached the point where no more increases are possible.
    Increasing the power limit further than is allowed by default requires a vbios mod, which comes with risk and voids the warranty. If you are not familiar with vbios mods, then there is no safe and easy way.

  • Server Group not working when one of Job Servers is down

    I have a Server Group of two job servers. They have the same version 12.2.2.3 and are attached to the same Local Repository of version 12.2.2.0.
    When I execute a batch job (from the Designer) and explicitly specify on which job server it should run, it works fine for either job server. Also, when I specify Batch Job execution on Server Group, it works fine.
    However, when I shut down one of the job servers and then try to execute the job on the Server Group, I get two error messages: BODI-1111011 that one job server is down, and BODI-1111009 that the other job server has failed.
    At the same time, when I check the Server Group in the Management Console, it shows the allegedly failed job server with a green status.
    The error is not reflected in the job server event log, nor is anything written to the webadmin log or the job trace (the latter isn't created at all).
    Is there anything I can do at this point except raise a support message?

    The issue was with different users configured for the Local Repository in the Admin and Job Server configs. I discovered it when trying to run the job from the Admin Console. The Designer is probably not the best diagnostic tool for this kind of issue.

  • A problem about the "Max running jobs". Thanks a million!

    Hello everyone,
    The "Max running jobs" was set to 15.
    (1). When I dragged two assets into FCSvr at the same time, in the "Search all jobs" window I saw the first one being uploaded then transcoded, and later the second one being uploaded then transcoded.
    (2). When I dragged them into FCSvr one by one, I saw both being uploaded, then the first one being transcoded, and the second one later.
    Thus with (2), there is no need to wait or queue between the upload of the second asset and the transcoding of the first.
    However, I want to drag the assets in at the same time and still get the behavior in (2), with no queuing. Could someone tell me how to do that?
    With best regards,
    Steven Lee
    PINZ Media

    If I understand correctly, you used the "Upload file..." menu for the upload.
    To upload and convert many files at the same time, you can set up a production action (response) for a folder on your device (XSAN, RAID, HDD).
    This action can create a production for the folder and create assets for the media in it. Then add a "SCAN" action in the response submenu. To transcode many assets at the same time, check the "Background Analyze" radio button in the "SCAN" response.
    You can read more about this in the manual.

  • FBZP different Max limit and amout limit in bank determination

    Hi:
    In FBZP there is a "Maximum limit" at the company code level/payment method.
    How does it differ from the "Amount limit" / "Available for outgoing payment" in bank determination?
    I have tried creating an invoice with a payment method whose amount exceeds the max limit at the company code or bank determination level, but the system still selects my invoice for processing.

    Hi Mandy,
    The maximum amount at the level of company code/payment method means that the system, at the time of creating payment documents, will not pick up any payment which exceeds this amount.
    For example, say you have a payment method "Cheque" and the organization wants to set a maximum limit for cheque payments; you can specify the maximum amount here, and F110 will not issue any cheque exceeding it.
    The available amount is maintained at the bank account level (I explained this in a previous thread you had raised).
    So, in short: the maximum amount applies to a single payment document that is created, and the available amount is the amount available to make all the payments from the proposal list.
    For your testing: check whether the payment method is specified at the item level in your posted document; if yes, remove it and try again. SAP help on this field says:
    "Items in which the payment method has already been specified are excluded from this. In these cases, the payment method specified is also valid if the maximum amount is exceeded."
    Regards,
    Kavita

  • Material Transaction sequence reached max limit in Version 11.0.3

    Problem description:
    Oracle Applications Version 11.0.3
    The material transaction sequence reached its max limit.
    The material transaction sequence got exhausted.
    All the material transactions got stuck in the AQ process.
    The material transactions involved pertain to PO receipts, sub-inventory transfers, and inter-org transfers.
    Could you please advise a solution?
    Thanks and Regards
    Aditya

    Hello,
    I had a similar problem where a standard sequence reached its max limit; I could not book sales orders. The solution from Metalink was simply to alter the sequence.
    http://docs.oracle.com/cd/B13789_01/server.101/b10759/statements_6014.htm
    MAXVALUE
    Specify the maximum value the sequence can generate. This integer value can have 28 or fewer digits. MAXVALUE must be equal to or greater than START WITH and must be greater than MINVALUE.
    Regards,
    Luko
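    For illustration, the kind of ALTER SEQUENCE Luko describes might look like this (the sequence name and new ceiling here are hypothetical examples, not taken from the thread):

    ```sql
    -- Hypothetical sequence name; raise MAXVALUE so transactions can continue.
    -- MAXVALUE must be >= START WITH and > MINVALUE, with 28 or fewer digits.
    ALTER SEQUENCE inv.my_transaction_seq MAXVALUE 9999999999999999999999999999;

    -- Or remove the ceiling entirely:
    ALTER SEQUENCE inv.my_transaction_seq NOMAXVALUE;
    ```

    Check with your application's support notes before altering a seeded sequence, since downstream code may assume a particular value range.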

  • Heap max limit

    Hello, I have done some tests to check the heap size of an application.
    This is my test code:
    public class Main {
         public static void main(String[] args) {
              Runtime runtime = Runtime.getRuntime();
              // Max limit for heap allocation, in bytes.
              long heapLimitBytes = runtime.maxMemory();
              // Currently allocated heap, in bytes.
              long allocatedHeapBytes = runtime.totalMemory();
              // Unused memory from the allocated heap (an approximation), in bytes.
              long unusedAllocatedHeapBytes = runtime.freeMemory();
              // Used memory from the allocated heap (an approximation), in bytes.
              long usedAllocatedHeapBytes =
                   allocatedHeapBytes - unusedAllocatedHeapBytes;
              System.out.println("Max limit for heap allocation: " +
                        getMBytes(heapLimitBytes) + "MB");
              System.out.println("Currently allocated heap: " +
                        getMBytes(allocatedHeapBytes) + "MB");
              System.out.println("Used allocated heap: " +
                        getMBytes(usedAllocatedHeapBytes) + "MB");
         }

         private static long getMBytes(long bytes) {
              return (bytes / 1024L) / 1024L;
         }
    }
    Then I ran this program with the option -Xmx1024m, and the result was:
    On windows: Max limit for heap allocation: 1016MB
    On HP-UX: Max limit for heap allocation: 983MB
    Does someone know why the max limit is not 1024MB as I requested?
    And why does it show a different value on Windows than on HP-UX?
    Thanks
    Edited by: JoseLuis on Oct 5, 2008 11:29 AM

    Thank you for the reply
    I have checked, and the page size on both Windows and HP-UX is 4KB.
    Also, the documentation for the -Xmx flag says that the size must be a multiple of 1024 bytes and greater than 2MB.
    I could understand the allocated size being rounded to the nearest page (4KB block), which would give a difference of less than 1MB between the requested size and the real allocated size, but on Windows the difference is 8MB and on HP-UX it is 41MB. That's a big difference.
    Am I missing something?

  • Destination and publication job servers - missing

    Hi,
    The destination and publication job servers seem to be missing. I need them to set up "Publication via SNC with SAP BW." Am I OK to set them up as follows in the CMC?
    DESTINATION JOB SERVER - Service Category: Core Services, Service: Destination Delivery Scheduling Service
    PUBLICATION JOB SERVER - Service Category: Core Services, Service: Publication Scheduling Service
    Also, do I need to include any of the additional services when creating the servers?
    Thanks

    That seems correct.
    But I do wonder how the servers went missing in the first place.
    You can add those core services.
    You can also check by creating a new SIA node using the CCM. You can try that too.
    Regards
    Gowtham

  • Job Cancelled with an error "Data does not match the job def: Job terminat"

    Dear Friends,
    The following job is with respect to an inbound interface that transfers data into SAP.
    The file mist.txt is picked from the /FI/in directory of the application server and is moved to the /FI/work directory of application server for processing. Once the program ends up without any error, the file is moved to /FI/archive directory.
    Below are the steps listed in the job log; no spool is generated for this job, and it ended with the error "Data does not match the job definition; job terminated". Please see below for more info.
    1. Job started
    2. Step 001 started (program Y_SAP_FI_POST, variant MIST, user ID K364646)
    3. File mist.txt copied from /data/sap/ARD/interface/FI/in/ to /data/sap/ARD/interface/FI/work/.
    4. File mist.txt deleted from /data/sap/ARD/interface/FI/in/.
    5. File mist.txt read from /data/sap/ARD/interface/FI/work/.
    6. PD-DKLY-Y_SAP_FI_POST: This job was started periodically or directly from SM36/SM37 (Message Class: BD, Message Number: 076)
    7. Job PD-DKLY-Y_SAP_FI_POST: Data does not match the job definition; job terminated (Message Class: BD, Message No.: 078)
    8. Job cancelled after system exception
    ERROR_MESSAGE                                                
    Could you please analyse and explain under what circumstances the above error is reported?
    I have also heard that the above error is raised because of customization issues in transaction BMV0.
    Also please note that we can define as well as schedule jobs from the above transaction, and the corresponding data is stored in the table TBICU.
    My Trials
    1. Tested uploading an empty file
    2. Tested uploading wrong data
    3. Tested uploading improper data with a false file structure
    But I failed to simulate the above scenario.
    Clarification required:
    Assume that I have defined a job using BMV0. Is it mandatory to use the same job in SM37/SM36 for scheduling?
    Is the above question valid?
    Edited by: dharmendra gali on Jan 28, 2008 6:06 AM


  • How to create reports servers with the same name in two nodes in Reports

    Greetings
    We are migrating Oracle Application Server 10g (9.0.4) to better hardware with high availability. We want to provision 2 new Oracle Application Server 10g (9.0.4) instances in high availability, and we want to avoid modifying the existing Forms and Reports code. The code looks for a specific reports server name when calling reports, but I couldn't create the same reports server name on both OAS machines; I could create the reports server on only one node. I want to create a reports server with the same name on both nodes, but it is not possible. I know there is a procedure to do this in 10.1.2 (Note 437228.1, How to Create Two Reports Servers With the Same Name in the Same Subnet), but I couldn't find the equivalent procedure for 9.0.4.
    Does anybody know how to create a reports server with the same name on two nodes using 10g (9.0.4)?
    Thanks
    Ramiro Ortiz.

    Hello.
    I applied patch 4092150 on my OAS 9.0.4.2 and modified my rwnetwork.conf file, changing the port to 14022, following the note "How to Create Two Reports Servers With the Same Name in the Same Subnet? [ID 437228.1]", but I am facing the same error, REP-56040 "server already exists in the network".
    When I run the osfind command I get the following information (my reports server is senarep and I want to create it on SNMVBOGOAS10):
    osfind: Found 2 agents at port 14000
    HOST: SNMVBOGOAS10.sena.red
    HOST: SNMVBOGOAS09.sena.red
    osfind: There are no OADs running on in your domain.
    osfind: There are no Object Implementations registered with OADs.
    osfind: Following are the list of Implementations started manually.
    HOST: SNMVBOGOAS10.sena.red
    REPOSITORY ID: IDL:oracle/reports/server/EngineComm:1.0
    OBJECT NAME: senarep2
    REPOSITORY ID: IDL:oracle/reports/server/ServerClass:1.0
    OBJECT NAME: senarep2
    HOST: SNMVBOGOAS09.sena.red
    REPOSITORY ID: IDL:oracle/reports/server/EngineComm:1.0
    OBJECT NAME: senarep
    REPOSITORY ID: IDL:oracle/reports/server/ServerClass:1.0
    OBJECT NAME: senarep
    Any Ideas?

  • Same old "how to limit history" question with FF 27.0.1

    Windows 7 Pro, FF 27.0.1, add-ons Adobe and Flash (and their auxiliaries)... just the basics. My box is a server as well as being used for more ordinary tasks (email, browsers, document editors, etc.), so it is always "on" and never idle. Therefore the add-ons that do these things don't do anything on my box.
    This is, I know, an old question, but unresolved in my opinion. I revive it from this post:
    https://support.mozilla.org/en-US/questions/799503?fpa=1
    First let me say I consider Firefox the best browser period. I like Mozilla's and Firefox's essential philosophy of user customization and control, while respecting their need to insure security and stability.
    OK, the question is obvious from the title: how can I limit the history in Firefox? I noticed more recently that enabling and disabling JavaScript from the preferences was removed. I read the Bugzilla and Firefox rationale, and it was not difficult for me to open an about:config tab next to a tab containing a page I was working on, so I could see how the page rendered without JavaScript. Once you knew how, it was only slightly more difficult than using the preferences options... so that does not really bother me.
    Limiting history is implied in the browser itself: if you open the history window, you can see your history and (for me) an "older than six months" option which, if you mouse over and right-click, gives a "delete" option. I think everyone who tries this crashes their browser, and if anything is removed when you finally get it back up, it will be your cache.
    I know that the amount of history is set in about:config by the integer "places.history.expiration.transient_current_max_pages". From reading many articles here in support, and from personal trial, I find that though that number can be increased, it can't be decreased. If one decreases it, it reverts to the old setting as soon as Firefox is re-opened.
    I have read the blog referenced in the first link above, http://blog.bonardo.net/2010/01/20/places-got-async-expiration, and it even seems outdated, containing references to places.history.expiration.max_pages, which is not the preference present in about:config in 27.0.1.
    In https://bugzilla.mozilla.org/show_bug.cgi?id=643254 I read the debate/discussion about these changes, where Marco Bonardo steadfastly holds his position. The comment by al_9x discussing the issue with him, "You have removed functionality that people use and like, that's been in Fx from its inception. It is you who need to justify its removal. And your justification of 'nobody really wants to expire history' is a lie, people do and I explained why", represents my feeling completely. I have no need or desire for history longer than 3 months. It is high-handed to limit Firefox users' ability to limit their history. And no, I did not create yet another Mozilla account and "vote."
    There must be some way that I and other users can limit (or increase) our history. So I am asking: how? If it is currently impossible by design, then I find that a very disturbing trend away from the whole philosophy of user control and customization.
    If I am missing something, please inform me. Please *don't* send me a bunch of the standard "help" articles, for I have read them all. So that's my question.
    Regards,
    Axis

    Thank you cor-el!
    I have added the new integer places.history.expiration.max_pages and when I added it places.history.expiration.transient_current_max_pages changed its value to equal what I had put in places.history.expiration.max_pages.
    This is all I needed, I believe. I did not know I had to ''create'' the new integer; now I know how to do so. To your credit, I would like to say to all Firefox users that this is the best solution I have read to the oft-asked question, "How can I limit my history in Firefox?"
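    For anyone following along, the same pref cor-el suggested creating in about:config can also be set via a user.js file in the Firefox profile folder (the value 100000 here is only an example; pick whatever page count suits you):

    ```js
    // user.js in the Firefox profile folder -- example value, adjust to taste
    user_pref("places.history.expiration.max_pages", 100000);
    ```

    Firefox reads user.js at startup, so the setting survives profile pref resets.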

  • 2 Front End Servers with reporting enabled in topology but only one server shows reports

    Hi,
    We have 2 Front End Standard pool servers with resiliency enabled between them. The Monitoring service is configured in the topology so that both Front End servers point to the same monitoring database. Half our users are homed on one Front End and half on the other. In the reports we can only see information on audio calls made by users homed on our primary Front End pool server; we cannot see call information for users homed on the other Front End server.
    Thanks, Kevin

    We have applied the back-end updates and verified the config by running the Test-CsDatabase cmdlet. I ran Enable-CsTopology, ran bootstrapper.exe, and rebooted both Front End servers, but we are still unable to see monitoring data for users homed on our second Lync Front End server.
    Any other suggestions? Thanks for the responses so far; I appreciate the assistance!
    DatabaseName               ExpectedVersion            InstalledVersion
    xds                        10.13.2                    10.13.2
    lis                        3.1.1                      3.1.1
    rtcxds                     15.13.1                    15.13.1
    rtcshared                  5.0.1                      5.0.1
    rtcab                      62.42.2                    62.42.2
    rgsconfig                  5.5.1                      5.5.1
    rgsdyn                     2.2.1                      2.2.1
    cpsdyn                     1.1.2                      1.1.2
    and ......
    DatabaseName               ExpectedVersion            InstalledVersion
    LcsCDR                     39.82.2                    39.82.2
    QoEMetrics                 62.90.1                    62.90.1
