Incremental Agg

I have a compressed and partitioned cube on 10.2.0.4 (AWM 10.2.0.3A). I have a basic measure, QC, which I would like to clear based on various custom criteria and then repopulate.
So I have written a DML program which applies the custom criteria and loads the values (all leaf cells) from the fact table into the CUBE_PRT_TOPVAR(CUBE_PRT_MEASDIM 'QC') variable.
Then I would like to aggregate just the given partition using the cube's default aggmap. I run the following commands:
allstat
aggregate CUBE_PRT_TOPVAR(partition p1) using Default_agg_map
(I arrive at the appropriate partition based on some custom criteria.)
This aggregation seems to take forever. I thought aggregating a single partition would speed up the whole process. To debug further, I re-ran the data-loading program, then removed all data from the fact view and refreshed the cube (full refresh) using AWM, and it took only 3-5 minutes to autosolve all partitions. Can anybody offer some insight into what I am doing wrong?
Thanks in advance,
Swapan.

Thanks Stuart for the response.
I will fill in the details from my trace very soon. Regarding the criteria: I have a dimension called CLIENTS (one of six) which has the following levels:
ALL_CLIENTS
CLIENTS
PRODUCTS
I partition my cube at the CLIENTS level of this dimension, as it gives me parallelization at the individual-client level. I need to dump and reload the data for a particular product of a given client. This can be achieved in many ways, but I am looking to do it at the DML layer.
So, in essence:
limit CLIENTS to 'PRODUCT_A'
clear status from (MeasureName)
This clears all the data for the required cut. Now I need to load the data just for this cut, and that's where I am running into issues.
I do the following:
1. Load
SQL_LOAD_PRG to the base measure
2. Agg
allstat
& (joinchars('aggregate CUBE_PRT_TOPVAR(partition ',partition(CUBE_PARTITION_TEMPLATE (CLIENTS 'PRODUCT_A')),') using DefaultAggMap'))
Swapan.

Similar Messages

  • Design Aggregation for ASO cubes

    Hi All,
    We have 12 ASO cubes on each server, and each cube has a minimum of 5-6 months of data. A normal aggregation's data preview takes a lot of time, so we are planning to do design aggregation. We have a couple of doubts about this aggregation:
    1) Does it have to be done every time we load data into the cubes, or, once we do it, is it applied automatically?
    2) Suppose this month we load 2 months of data into the cubes and then perform design aggregation. Next month we load 2 more months of data. Do we again need to do a design aggregation over the total 4 months, or is there a method for partial or incremental design aggregation, i.e., since we have already done the design aggregation, covering only the next two months rather than all 4?
    Kindly let me know if any automation process is available for this.
    Thanks,
    Vikram

    Hi Vikram,
    Which version are you using?
    1) Do you reset the cube (clear the data) whenever you re-load it?
    If yes, you can't expect your earlier aggregations to still be there. However, if you've saved your aggregation selections and the outline is more or less the same, you can materialize the aggregations by using the saved script.
    If your load is an incremental one, the changes that take place in your outline matter a lot, as they may invalidate your previous aggregation selections.
    2) As more level-0 data starts flowing in, you have to periodically monitor the trade-off between aggregation time/space and performance requirements. The only thing I know about incremental aggregation for ASO is to enable user query tracking and do the necessary aggregations.
    Visit these links to learn how to automate it:
    [execute aggregate selection|http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/maxl/ddl/statements/execaggsel.htm]
    [execute aggregate process|http://download.oracle.com/docs/cd/E10530_01/doc/epm.931/html_esb_techref/maxl/ddl/statements/execagg.htm]
    Let us know in case you have any questions.
    - Natesh

  • Incremental Dimension Build

    Hi,
    We have a BSO cube in Essbase with a custom dimension "Product" that has around 30K members. Depending on requests from the business users, we need to rebuild this hierarchy with the changes that come in the new files, which may contain:
    - New members
    - Removed old members
    - Members moved across different parents within the same dimension
    Until now we have been deleting the whole dimension hierarchy manually and updating the outline with the new file, but because of this the whole data set needs to be reloaded. What is the best option we can take in the rule file, or any other option, so that we don't need to delete the existing hierarchy manually and then reload all the data?
    I want to know if there is something like an incremental dimension build which would take care of all these changes and save us the manual effort.
    Any suggestions would be really helpful.
    --XAT

    Well, you could test it to assure yourself (and you should), but if the members are retained, then the data associated with them will be preserved. One note: if you are moving members within hierarchies, you will need to re-aggregate the database after the dimension build.

  • I am trying to restore my catalog, having previously done a backup to an external hard drive and subsequently an incremental backup. I am using Photoshop Elements 11 and the only option given in the restore procedure is to open a .tly file.

    I have done this, but now the restore function is asking for yet another file, which I assume to be the original backup. However, that is the only .tly file, since the only other relevant file appears to be called catalog.buc, and that is just not visible when using the restore function. How do I continue from here with the restoration of my catalog?

    Martin_Had wrote:
    Thank you Andaleeb. I appear to have an old backup from a year ago, and a more recent full backup plus an incremental backup.
    Regrettably I don't really understand what is going on, because firstly the restore does not complete its cycle, so I cannot see what that backup file contains, and secondly everything I have read suggests that the .tly file is the full backup and the catalog.buc file is the incremental backup. At present the catalog shows the photos for 2014, which makes me think I might have restored from the old backup file.
    I am minded to create another catalog and try again.
    Any views on what I can do?
    A backup (full or incremental) is a folder, not a file. It contains renamed copies of your picture files as well as copies of the files and subfolders of the original catalog. The catalog.buc file is a renamed copy of the database of your original catalog, while the backup.tly file contains the information needed to restore the renamed pictures where you decide: the original location or a new custom one. You can't do anything with the backup yourself; only the restore process can do the job, and only if it finds the backup.tly file. In the case of an incremental backup, you have to tell the restore process where to find the incremental backup folder; it finds the backup.tly file in that folder and determines what is to be restored there; then it asks you for the previous backup folder (in your case the full backup); you then browse to that full backup folder so that the restore process can find the backup.tly there; the restore then deals with the rest of the files to restore.

  • Regarding REFRESHING of Data in Data warehouse using DAC Incremental approach

    My client is planning to move from Discoverer to OBIA, but before that we need some answers.
    1) My client needs the data to be refreshed every hour (incremental load using DAC) because they use a lot of real-time data.
    We don't have much updated data (e.g., 10 invoices in an hour plus some other records). How much time does it usually take to refresh those tables in the data warehouse using DAC?
    2) While a table is being refreshed, can we use it to generate a report? If yes, what is the state of the data: stale or incorrect (undefined)?
    3) How does the refresh of Financial Analytics work? Is it one module at a time, or does it treat all 3 modules (GL, AR, and AP) as a single unit of refresh?
    I would really appreciate it if I could get answers to all of these questions.
    Thank You,

    Here you go for answers:
    1) Shouldn't be much of a problem for such a small amount of data. It all depends on your execution plan in DAC, which can always be created anew and customized to load data for only those tables (star schema). Approx. 15-20 mins, as it does many things apart from loading the tables.
    2) Reports in OBIEE will show the previous data, as the cache will be (should be) turned on. You will get the new data in reports after the refresh is complete and the cache is cleared using one of various methods (event polling preferred).
    3) Again, for Financial Analytics or any other module, you will have out-of-the-box execution plans, but you can create your own plans and execute them. GL, AR, and AP are also provided separately.
    Hope this answers your questions. You will learn more by going through the Oracle docs, particularly for DAC.

  • Is there a routine one can use to shift the column of data by one each time the loop index increments? In other words, increment the column that the data is saved to by using the index?

    The device, an Ocean Optics spectrometer, outputs columns of about 9000 cells. I'm saving this as an .lvm file using "Write To Measurement File.vi", but as far as I can tell it doesn't give me the flexibility I need.
    I need to move the column by the index of the for loop, so that when i = n, the data will take up column n+1 (the 1st column is used for wavelength). How do I use "Write To Spreadsheet File.vi" to do this? Also, if I use "Write To Spreadsheet File.vi", is there a way to increment the file name so that the data isn't overwritten? I like what "Write To Measurement File.vi" does.
    I'd really appreciate any help someone can give me. I'm a novice at this, so the greater the detail, the better. Thanks!!!

    You cannot write one column at a time to a spreadsheet file, because a file is arranged linearly, and adding a column would mean moving (i.e., reading and rewriting elsewhere) almost all existing elements to interleave the new data. You can only append new rows without having to touch the already-written data.
    Fields typically don't have fixed width. An exception would be binary files that are pre-allocated at the final size; in that case you can write columns by setting the file position for each element. It will still be very inefficient.
    What you could do is append rows until all the data is written, then read, transpose, and write back the final file.
    What you could also do is build the final array in a shift register and write the entire thing to file at once after all the data is present (see the sketch below).
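    For what it's worth, here is a rough sketch of that last idea in Java rather than LabVIEW G code, since text makes the structure easy to show; the acquisition functions, file name, and acquisition count below are made-up stand-ins, not anything from the original post. Accumulate every spectrum first, then write the whole table in one pass, emitting each acquisition as a column next to the wavelength column; indexing the file name (spectra_001.csv, spectra_002.csv, ...) also addresses the don't-overwrite question.

    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;

    public class SpectraWriter {
        public static void main(String[] args) throws Exception {
            int numPoints = 9000;                      // cells per spectrum, per the post
            int numAcquisitions = 5;                   // hypothetical number of loop iterations
            List<double[]> spectra = new ArrayList<>();

            double[] wavelengths = acquireWavelengths(numPoints);
            for (int i = 0; i < numAcquisitions; i++) {
                spectra.add(acquireSpectrum(numPoints));   // accumulate only; write nothing yet
            }

            // One output pass: wavelength first, then one column per acquisition.
            // No column ever has to be inserted into an already-written file.
            try (PrintWriter out = new PrintWriter("spectra_001.csv")) {
                for (int p = 0; p < numPoints; p++) {
                    StringBuilder line = new StringBuilder(Double.toString(wavelengths[p]));
                    for (double[] s : spectra) {
                        line.append('\t').append(s[p]);    // transpose on the fly
                    }
                    out.println(line);
                }
            }
        }

        // Stand-ins for the instrument driver; both are hypothetical.
        static double[] acquireWavelengths(int n) {
            double[] w = new double[n];
            for (int i = 0; i < n; i++) w[i] = 400.0 + i * 0.05;   // dummy wavelengths (nm)
            return w;
        }

        static double[] acquireSpectrum(int n) {
            double[] s = new double[n];
            for (int i = 0; i < n; i++) s[i] = Math.random();      // dummy intensities
            return s;
        }
    }

    The same shape maps onto LabVIEW: a shift register accumulates the 2D array inside the loop, and a single Write To Spreadsheet File call after the loop writes everything, using its transpose input if the columns need to be swapped.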
    LabVIEW Champion. Do more with less code and in less time.

  • SYSDATE is not getting incremented in the parameter - Scheduler

    Hi All
    I have used the query 'SELECT SYSDATE FROM DUAL' for a parameter in a concurrent request. If I execute this concurrent program as a single request, it returns the exact SYSDATE (current date) in the parameter.
    If I schedule this program for daily execution using the scheduler, then SYSDATE always returns the date on which I configured the schedule (even though I have enabled the "Increment date parameters each run" check box).
    For example:
    My requirement: the scheduler is configured so that the program runs every day, and whenever it runs, my query has to return the current date.
    Current issue: I set up the schedule on 23-DEC-2008. If it runs on 24-DEC-2008, it returns '23-DEC-2008'; if it runs on 28-DEC-2008, it also returns '23-DEC-2008'. However, if I execute this as a single request, it works fine.
    Can anyone help me, please? It is very urgent.
    Thanks in Advance

    Hi,
    This forum is dedicated to the Oracle Scheduler, which uses the dbms_scheduler package.
    It looks like you might be using a different scheduler, since the Oracle Scheduler does not allow using a query as a parameter and does not have an "increment date parameters" check box.
    If you are using the Oracle Applications Scheduler, you should ask this question on the Applications forum here:
    http://forums.oracle.com/forums/category.jspa?categoryID=3
    If you are using the Enterprise Manager Scheduler, you should ask on the Grid Control forum here:
    Enterprise Manager
    If you are using the Oracle Scheduler, you should post the code used to create your job, which may be found by using the "Show SQL" button on the create job web page. This should include a call to dbms_scheduler.create_job.
    Thanks,
    Ravi.

  • InDesign CS6 Text Wrap increments not accurate.

    My work jumped from CS4 to CS6, so I don't know if CS5 had the same issue. Anyway: text wrap (around bounding box) now seems to be very limited. For example, if I have, say, a .25 text wrap on the bottom of an object but need to change it slightly to .245, the text does not move. The box shows the change, but the text below just doesn't follow suit. In CS4 I was able to make tiny incremental changes to the wrap, but it is not working for me now in CS6. Any ideas or answers?

    Suggest you post product-specific questions in the relevant product forum.
    Try posting at http://forums.adobe.com/community/indesign/indesign_general

  • Refreshing the browser causes results to increment

    Hi,
    I have a servlet that uses a database to query results from a survey. The survey has 3 questions, and each one of them has 4 possible answers. The query runs fine and displays in the web browser, but once I do a refresh, some of the data increments its value at a constant rate.
    Here is the code:
    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.sql.*;

    public class Survey extends HttpServlet {

        private Connection con = null;
        private Statement stmt = null;
        private String url = "jdbc:odbc:survey";
        private String table = "results";
        private int numQues = 3;
        private int[] numAns = {4, 4, 4};
        private int num = 0;   // NOTE: instance field, never reset between requests

        public void init() throws ServletException {
            try {
                // loading the jdbc-odbc bridge
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                // making a connection
                con = DriverManager.getConnection(url, "anonymous", "guest");
            } catch (Exception e) {
                e.printStackTrace();
                con = null;
            }
        }

        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // collect the user's answers; the form fields are named q0, q1, q2
            String[] results = new String[numQues];
            for (int i = 0; i < numQues; i++) {
                results[i] = req.getParameter("q" + i);
            }
            // build the quoted, comma-separated value list for the INSERT
            String resultsDb = "";
            for (int i = 0; i < numQues; i++) {
                if (i + 1 != numQues) {
                    resultsDb += "'" + results[i] + "',";
                } else {
                    resultsDb += "'" + results[i] + "'";
                }
            }
            boolean success = insertIntoDb(resultsDb);
            // print a thank-you message
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            if (success) {
                buffer.append("Thank you for participating!");
            } else {
                buffer.append("An error has occurred. Please press the back button of your browser");
                buffer.append(" and try again.");
            }
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            res.setContentType("text/html");
            PrintWriter output = res.getWriter();
            StringBuffer buffer = new StringBuffer();
            buffer.append("<HTML>");
            buffer.append("<HEAD>");
            buffer.append("</HEAD>");
            buffer.append("<BODY BGCOLOR=\"#FFFFFF\">");
            buffer.append("<P>");
            try {
                stmt = con.createStatement();
                // find the number of participants (num accumulates across requests)
                for (int i = 0; i < 1; i++) {
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    while (rs.next()) {
                        rs.getInt("q" + i);
                        num++;
                    }
                }
                // loop through each question
                for (int i = 0; i < numQues; i++) {
                    int[] results = new int[num];
                    String query = "SELECT q" + i + " FROM " + table;
                    ResultSet rs = stmt.executeQuery(query);
                    int j = 0;
                    while (rs.next()) {
                        results[j] = rs.getInt("q" + i);
                        j++;
                    }
                    // tally the answers and display the results
                    int[] total = percent(results, 4);
                    buffer.append("Question" + i + ":<BR>");
                    for (int k = 0; k < 4; k++) {
                        buffer.append(" > Answer " + k + ":" + total[k]);
                        buffer.append("<BR>");
                    }
                    buffer.append("\n");
                }
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
            buffer.append("</P>");
            buffer.append("</BODY>");
            buffer.append("</HTML>");
            output.println(buffer.toString());
            output.close();
        }

        public void destroy() {
            try {
                con.close();
            } catch (Exception e) {
                System.err.println("Problem closing the database");
            }
        }

        public boolean insertIntoDb(String results) {
            // the values are concatenated directly into the SQL string
            String query = "INSERT INTO " + table + " VALUES (" + results + ");";
            try {
                stmt = con.createStatement();
                stmt.execute(query);
                stmt.close();
            } catch (Exception e) {
                System.err.println("ERROR: Problems with adding new entry");
                e.printStackTrace();
                return false;
            }
            return true;
        }

        public int[] percent(int[] array, int numOptions) {
            System.out.println("==============================================");
            int[] total = new int[numOptions];
            // initialize the tally array
            for (int i = 0; i < total.length; i++) {
                total[i] = 0;
            }
            // count how many respondents picked each option
            for (int j = 0; j < numOptions; j++) {
                for (int i = 0; i < array.length; i++) {
                    System.out.println("j=" + j + "\t" + "i=" + i + "\ttotal[j]=" + total[j]);
                    if (array[i] == j) {
                        total[j]++;
                    }
                }
            }
            System.out.println("==============================================");
            return total;
        }
    }
    Thanks!

    Hi,
    I encountered a similar problem. The root cause was that the URL in the browser's location bar still points to the same servlet (the one that handles the HTTP POST/GET request to update the database) after the result has been returned by the servicing servlet.
    For example, in your case the "Survey" servlet URL.
    The scenario is like this.
    1. Browser (HTML form) --- HTTP POST ---> Survey Servlet (services the request & updates the database)
    2. Survey Servlet --- (result in HTML via HttpServletResponse) ---> Browser (displays the result)
    Note that after step 2, the browser's location bar still points to the Survey Servlet URL (the one used in the HTML form's action value). So if a refresh is performed here, the same action will be repeated; as a result, 2 identical records are created in your database table instead of one.
    A way to work around this is to split the servlet into two: one performing the database update and one responsible for displaying the result.
    Things then become as follows:
    1. Browser (HTML form) --- HTTP POST ---> Survey Servlet (services the request & updates the database)
    2. Survey Servlet --- (redirects the HTTP request) ---> Display Result Servlet
    3. Display Result Servlet --- (acknowledgement in HTML via HttpServletResponse) ---> Browser (displays the result)
    Note that now the browser's location bar will point to the Display Result Servlet. A refresh will be sent to the Display Result Servlet, which will not duplicate the database update.
    To redirect the request to another servlet, use res.sendRedirect(url), where res is the HttpServletResponse object and url is the effective URL of the target servlet.
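    To make it concrete, here is a minimal sketch of that split; the class names, the "result" mapping, and the elided database code are assumptions for illustration, not taken from your code.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    // Handles the form POST, updates the database, then redirects (Post/Redirect/Get),
    // so a browser refresh re-requests the result page instead of re-posting the form.
    class SurveySubmitServlet extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            // ... insert the answers into the database here, as insertIntoDb() does above ...
            res.sendRedirect("result");   // "result" is a hypothetical mapping for the display servlet
        }
    }

    // Displays the outcome; safe to refresh because this GET does not modify any data.
    class SurveyResultServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws IOException {
            res.setContentType("text/html");
            PrintWriter out = res.getWriter();
            out.println("<HTML><BODY><P>Thank you for participating!</P></BODY></HTML>");
            out.close();
        }
    }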
    Hope this helps.

  • Can't resume incremental backups

    I just finished moving, and after a few weeks of it sitting in a box I set up my Time Capsule to resume operation as my backup device and wireless router. Unfortunately, I must have made a mistake during setup, because now my MacBook Pro is treating the TC as a completely new device, seeking to make a full backup of its entire hard drive rather than the first incremental backup since before the move. Is there some way of forcing my MBP to recognize the existing backups on the TC and pick up where it left off? I'd really appreciate some help, because I've got some valuable old files trapped in those old backups. Thanks!

    I think I have an idea of why this might be happening now, but I don't know how to fix it. Any help would be appreciated.
    So I checked this directory:
    /Volumes/Time Machine Backup 2/Backups.backupdb
    and there were two directories: a new one called "computer" and the old folder with my backups, "computer 2". So Time Machine must be looking at the "computer" folder. How can I force Time Machine to look at the other, older folder? I tried removing the "computer" folder, but it won't let me rename "computer 2" to "computer", even from the terminal using sudo and admin privileges. Is there a reason why I'm not allowed to rename files on my own machine when I'm the admin?
    Thanks

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we face performance issues with one of our Planning jobs (call it Job E). It takes 3x the time to complete in our new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actuals data and then does the aggregation. The pattern we noticed is: if we run a restructure on the application and execute this job immediately, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, whereas if we run the same sequence in 11.1.2.3 it takes 3x the time. We don't have a window to restructure the application before running Job E every time in production. The specs of the new environment are much higher than the old one's.
    We have Essbase clustering (MS active/passive) in the new environment, and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.

  • ACS 5.3 Incremental BackUp Issue

    We have ACS 5.3, and it turns back to "Off" by itself and doesn't perform the incremental backup. I have turned it "On" several times, but it keeps turning itself "Off".
    Version: 5.3.0.40.8 in a VM environment.

    Hi Kumar2000,
    I tried to answer Anas's query here; you may want to go through it.
    https://supportforums.cisco.com/discussion/12142716/acs-inceremental-backup-turns
    Regards,
    Jatin Katyal
    *Do rate helpful posts*

  • ACS 5.3 incremental backup error

    Hi,
    I have an ACS 5.3 that has recently been having problems with the incremental backup.
    The error is: "On-demand backup failed"
    and the details are: "SQL Anywhere backup utility connection error: insufficient system resources - failed to allocate a SYSV semaphore (null)".
    I mean, come on... and I did not find this error on the Cisco website.
    The ADE.log file is not showing errors/details related to this. Attached are the files showing the errors.
    Has anyone faced this problem before? Ideas? Anything?
    Regards,
    George

    Hi George:
    With 5.3 I experienced many issues, including the incremental backup not working: whenever I set it to "On", the next time the scheduled backup came around it failed and set itself back to "Off". I did not get the same message you get, though.
    I finally did two things:
    - upgraded to latest patch.
    - moved the log collector from the primary to the secondary.
    Now things are fine for about 1 month without issues.
    Regarding your issue, I think it could be related to a resource problem, as mentioned in the message.
    What is the current DB size that you have?
    Note that the message is misleading (the messages I got with my ACS are the same), because the title mentions the incremental backup and the text then says the on-demand full backup failed!
    So you have to determine yourself whether the issue is with the incremental backup or the full backup.
    HTH
    Amjad
    Rating useful replies is more useful than saying "Thank you"

  • User Profile Incremental Sync does not see changes in profiles

    User Profile Incremental Sync does not see changes in profiles. I run a full sync and all is OK. Then I change the phone number, for example, and start an incremental sync, and nothing moves to AD.
    All the stages of the FIM sync just report 0 records to process. It looks like it does not see the changes.
    1. The profile property is set up to sync with an AD attribute (with EXPORT).
    2. There are no errors in the Windows logs, the FIM UI client, or the SharePoint UI.
    3. SharePoint Server 2010 SP1, June 2012 CU, 14.0.6123.5002.
    4. I have just reprovisioned the sync service and set up the AD connections and property-attribute relations. Before that there was this problem, http://social.msdn.microsoft.com/Forums/en-US/cb2b8aeb-d1b6-49a6-a788-2491ca45308a/critical-error-6398-userprofileimportjob-problem-profile-sync-malfunction?forum=sharepointadmin,
    which I resolved by clearing the timer service cache.
    Please help with advice, URLs, or solutions.

    Hi VlH,
    According to your description, the issue can be caused by the Profile or Profile Values table.
    You can check the event log and the ULS log to see the detailed error messages.
    To check the event log, click the Start button and type "Event Viewer" in the Search box.
    For SharePoint 2013, by default, the ULS log is at
    C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS
    For your issue you can refer to the similar thread:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/bc3cd2ff-dc12-4223-a80b-e4cae616e861/user-profile-sync-doesnt-update-the-profiles-from-ad?forum=sharepointadminlegacy
    Also a good blog for user profile synchronization:
    http://www.harbar.net/articles/sp2010ups.aspx
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • User Profile Service - User Profile Incremental Synchronization timer job stuck at 33%, Status: Pausing

    The User Profile Service - User Profile Incremental Synchronization timer job is at Progress: 33%, Status: Pausing.
    It has been almost 15 days.
    Both the User Profile Service and the User Profile Synchronization Service are in the Started state, and the FIM service is also starting.
    I tried clearing the SharePoint config cache.
    I also restarted the SharePoint Timer Service.
    I have tried almost everything that is on the Internet, but nothing helped.
    Is there any other way to solve the issue? I am stuck on a production server (ASAP).
    In Synchronization Service Manager, the status of MOSS_DeltaImport has been In Progress for the past 2 days.
    Best Regards.

    Hi,
    Please follow the steps in the link below to clear the configuration cache.
    http://blogs.msdn.com/b/jamesway/archive/2011/05/23/sharepoint-2010-clearing-the-configuration-cache.aspx
    Here is a similar thread for your reference:
    https://social.technet.microsoft.com/Forums/en-US/beaa852c-6f40-428a-b97c-20722864e045/user-profile-service-user-profile-incremental-synchronization-timer-job-stuck-at-88-status?forum=sharepointadminprevious
    Or try to clear the file system cache on all servers in the server farm on which the Windows SharePoint Services Timer service is running. Microsoft has provided a step-by-step procedure for clearing the file system cache from the SharePoint front-end servers
    in this kb article.
    You can also check the ULS logs for error messages.
    http://sharepointlogviewer.codeplex.com/
    Best Regards
    Dennis Guo
    TechNet Community Support
