Ever-expanding array

Is it possible to implement an ever-expanding array (without the help of the Vector and ArrayList Java utilities) starting from:
String[] wordList = new String[1];
and expanding whenever a new word is added? Every element of the array must store a word and there should be no null values. Is this even possible?

Nope, I'm afraid not. If you need an expanding array, try an ArrayList.
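That said, what ArrayList does internally can be reproduced by hand: keep a count of the used slots and copy into a bigger array when full. A minimal sketch, assuming nothing beyond java.util.Arrays (the class name GrowableWordList is made up for illustration):

```java
import java.util.Arrays;

public class GrowableWordList {
    private String[] words = new String[1];
    private int count = 0;   // number of real (non-null) entries

    public void add(String word) {
        if (count == words.length) {
            // Full: copy into an array twice the size, as ArrayList does internally.
            words = Arrays.copyOf(words, words.length * 2);
        }
        words[count++] = word;
    }

    // Return an array holding exactly the added words, with no null padding.
    public String[] toArray() {
        return Arrays.copyOf(words, count);
    }
}
```

toArray() trims the spare capacity away, which is how the "no null values" requirement can be met even though the internal array is over-allocated.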

Similar Messages

  • How to handle ever expanding array?

    Hi,
    I'm looking for help with a temperature control experiment...
    I have a program to acquire data from 24 channels through a while loop.
    I am displaying the data in a waveform graph. Currently, I am using
    shift registers and build array to add new data to the graph every time
    through. This worked fine while I was prototyping the hardware, but now
    I'd like it to run for days on end; as everyone can imagine, the array
    keeps getting bigger and the build array function keeps hogging memory
    until the whole thing crashes.
    I'm guessing that I should be writing the data to a file every so
    often, which would keep the size manageable, and would allow me to know
    the size of the array and could use other operations rather than build
    array. The array would need to stay pretty small so the write operation
    didn't slow down the loop too much. The program is pretty slow, since
    I'm limited in how quickly I can
    change temperatures. Currently, I go through all 24 probes about every
    2.5 seconds (this is set just by how long it takes the routine to read
    each temperature and calculate the control parameters). If it takes a
    little bit longer each time through, that's not a huge problem, but a
    big pause every so often would make the control routine behave oddly.
    What the save-to-file option takes away, however, is the ability to
    walk up to the machine in the morning and see at a glance how well it
    regulated the temperature during the night. I've thought about keeping
    some reduced set of the data (every minute or so) to give me a
    low-frequency glance at the data, but it seems like this will simply
    postpone the inevitable memory loading. I don't know if I could have a
    separate program that opened up the files that the control routine
    wrote, but this would probably be too much for the poor old PII machine
    that is running this.
    Does anyone have a simple solution to this problem? Failing that, a
    complex solution might do, although it might get beyond my programming
    abilities.
    I only have LV 6.0, so I can't see examples saved for versions later than this.
    Thanks for any suggestions.
    cheers,
    mike

    Mike,
    Writing data to a file every so often is definitely what you want to do. Then if power fails or the program shuts down for some reason, the data up to the last save is still in the file. At the 2.5 second sample rate you only get ~35,000 points per channel per day, so data files will not get too large. Decide how often to write by how valuable the data is: what is the penalty for losing data for the last minute, hour, day...?
    Another point is that the panel display only occupies a few hundred pixels in each direction. Plotting more points than that is meaningless. If you plot one point per minute you will have 1440 points per day (per channel). You could always show the most recent 24 hours using a circular buffer. With more programming effort, you could keep previous days' plots available for selection by the user or read them back from the file.
    Lynn
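Lynn's circular-buffer idea can be sketched outside LabVIEW too; here is a minimal Java version (the class name and the capacity are illustrative, e.g. 1440 slots for one point per minute over 24 hours):

```java
public class CircularBuffer {
    private final double[] data;
    private int next = 0;   // next write position (also the oldest slot once full)
    private int size = 0;   // how many slots are filled so far

    public CircularBuffer(int capacity) {
        data = new double[capacity];
    }

    public void add(double value) {
        data[next] = value;                 // overwrite the oldest point when full
        next = (next + 1) % data.length;
        if (size < data.length) size++;
    }

    // Return the stored samples in chronological order, oldest first.
    public double[] snapshot() {
        double[] out = new double[size];
        int start = (size < data.length) ? 0 : next;
        for (int i = 0; i < size; i++) {
            out[i] = data[(start + i) % data.length];
        }
        return out;
    }
}
```

Memory use stays constant no matter how long the acquisition runs, which is exactly the property the original poster is missing with Build Array in a shift register.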

  • How to redefine (expand) array at runtime?

    If I have an array
    MyArray = new Object[100];
    how can I redefine MyArray -- AT RUNTIME -- to hold 200 Objects instead of 100?
    Is that even possible in Java?
    Thanks,
    Matthew

    I think you're right -- but I should be more specific in my example:
    MyTable = new Object[100][NUM_FIELDS];
    Where "100" is the number of records in the database.
    What if I need to add a record to the database?
    Now ArrayList might be a good solution, but how would I declare one in my case? The left-most index is the one I'm trying to make "dynamic" via ArrayList, whereas the right-most index is always the same number.
    Any code samples, or pointers to a good explanation?
    I found the Doc page for ArrayList:
    http://java.sun.com/j2se/1.5.0/docs/api/java/util/ArrayList.html
    but it doesn't give any examples and doesn't mention multi-dimensional arrays.
    Matthew
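To answer both questions with a sketch (hedged, since the exact record layout is Matthew's): a plain array cannot be resized, but it can be replaced with a larger copy via java.util.Arrays.copyOf; and for the "dynamic left index, fixed right index" case, an ArrayList of fixed-size rows works well. The class name RecordTable and the NUM_FIELDS value are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RecordTable {
    public static final int NUM_FIELDS = 4;   // the fixed right-hand dimension
    private final List<Object[]> rows = new ArrayList<>();

    public void addRecord(Object[] record) {
        if (record.length != NUM_FIELDS) {
            throw new IllegalArgumentException("expected " + NUM_FIELDS + " fields");
        }
        rows.add(record);   // the left-hand dimension grows as needed
    }

    // Convert back to the original Object[][] shape when an API requires it.
    public Object[][] toArray() {
        return rows.toArray(new Object[0][]);
    }
}
```

For the literal 100-to-200 question, `MyArray = Arrays.copyOf(MyArray, 200);` replaces the array with a larger shallow copy; the class above simply avoids ever needing to do that by hand.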

  • Ever-expanding file w/ "Deploy to EAR file"

    If I don't delete any pre-existing EAR file in JDeveloper prior to choosing "Deploy to EAR file" for my application, JDeveloper seems to include the existing EAR file in the new one, so my EAR file grows with each deployment, from 60K to 120K to 240K to ... 10, 20 megabytes, etc. I can only get a clean "Deploy to EAR file" (chosen by right-clicking on the .deploy file in the Navigator) if I first delete the existing EAR file. Is there a way to get JDeveloper not to do this, in case I forget to delete from disk first? Thanks.


  • Will att&t ever expand its lines

    Hi, I live outside Cleveland, OK on 36500 Rd. There is Internet service about 3/4 of a mile from my house, but everyone beyond that first 3/4 mile has to have satellite Internet, which you might as well not have any Internet at all. Just wondering what we could do to get the land lines extended to cover the other households on the road?

    I know this is no help, just something I'm adding to the topic: there are cases where folks living in a neighborhood on one side of a road can get service, while those literally just across the street cannot. Not that it really adds anything, but here is an article about AT&T service: http://arstechnica.com/business/2015/06/internet-nightmare-att-sells-broadband-to-your-neighbors-but-not-to-you/

  • Multiple libraries on iTunes 7

    I currently have 2 libraries on one desktop & want to start a 3rd - my problem is switching between them. I hold down the shift key when opening iTunes, but when prompted to choose a library I am faced with an ever-expanding array of folders within folders & have no idea which one is the last one used & therefore the one I want to open now! Is there some way to simplify things & consolidate all these files & folders? I am using Windows XP SP2.

    I think there are a large number of us who are in the "for some reason" category: we want to have our libraries split across multiple HD's.
    Here's my exact scenario:
    I have a small portion of my library stored locally on my PowerBook for when I am on the road (~30GB). I have the bulk of my music collection stored on an external drive at home (~160GB). I have another set of tracks that are works in progress that my band is in the process of writing/recording, stored on a 2nd external drive at our studio (~10GB). I want to be able to plug in any of the external drives, and have access to that content whenever I am at home or at the studio, without having to change the local library stored on my PowerBook.
    Consolidating the library does not help at all in this scenario - not only does the complete library not fit on my PB's hard drive, but I also don't want the reverse, which is to consolidate to one external drive which I'd then have to carry around everywhere. iTunes should be smart enough to recognize different library files concurrently, and store the import/organize settings per library. I guess that's too much to ask...

  • Triggering a 6024E into data acquisition from start and end number of finite pulse generator from a 6602

    My motion control system is driven by a 6602. I wanted to acquire analog current (converted to a voltage via an I/V converter) from a 6024E AI when:
    (1) At the start of the pulse generation
    (2) Stop at the end of the pulse generation
    (3) Read every possible data between and stream it on the harddisk
    (4) Option to skip at regular intervals to reduce amount of data accumulation
    Anyone have some suggestions? What I did try was to "loop an AI from a triggered intermediate Analog Input VI", but this is rather erratic! My questions for the analog input software are:
    (1)Can you trigger an AI to start a continuous acquisition?
    (2) How do you do AI from a "start" pulse train to "end" pulse train?
    (3) How do you manage time for File I/O meanwhile doing (1) and (2) above?
    Bernardino Jerez Buenaobra
    Senior Test and Systems Development Engineer
    Test and Systems Development Group
    Integrated Microelectronics Inc.- Philippines
    Telephone:+632772-4941-43
    Fax/Data: +632772-4944
    URL: http://www.imiphil.com/our_location.html
    email: [email protected]

    "(1)Can you trigger an AI to start a continuous acquisition?
    (2) How do you do AI from a "start" pulse train to "end" pulse train?
    (3) How do you manage time for File I/O meanwhile doing (1) and (2) above?"
    Answer 1 and 2)
    Yes you can, This VI is part of the search examples that ships with LV:
    "Continuous Acquisition & Graph Using Digital Triggering and External Scan Clock
    Demonstrates how to continuously retrieve data from one or more analog input channels using an external scan clock when a digital start trigger occurs."
    Go to Search Examples in the Help/Contents of LV. Then pick I/O Interfaces/Data Acquisition (DAQ)/Analog Input/Triggering an Acquisition/Triggering a Continuous Acquisition.
    This VI is appropriate if you want to clock the DAQ with some clock (external scan clock) other than the onboard clock.
    The start trigger required by the DAQ card will be a TTL signal attached to the PFI0/TRIG1 pin of your DAQ card (via whatever signal conditioning you have).
    The external clock also needs to be TTL and is attached to the PFI7/STARTSCAN pin.
    You tell it which AI channel to use and wire that up appropriately.
    All of this stuff is in the context help for the VI.
    Answer 3)
    You have a misconception of how the acquisition makes its way into a file on the hard disk. You don't really "stream to disk." The VI above will run a buffered acquisition. The DAQ card sets up a buffer that fills with the data continuously. When you use the VI you set up how the buffer is configured; you will see controls for buffer size, number of scans to read at a time, etc. The acquisition runs data into the buffer continuously, and reads from the buffer are a parallel process where chunks of the buffer are extracted serially. You can end up with a scan backlog where the reads are falling behind the data; making the buffer bigger helps. All of this is limited by the sampling capability of the DAQ card: 200 kSamples/second is for one channel of AI. Divide by 2 for 2 channels, etc.
    The short answer is that the VI and DAQ card manage the file I/O for you.
    The VI above writes the scan out to the waveform chart on the front panel. You will want to wire that data out from the AI Read Sub VI to a spreadsheet file in tab delimited form or similar.
    Look inside the VI block diagram. There is a While Loop containing the AI Read. Every time the loop runs (unless you hit STOP), the AI Read plucks the specified # of scans from the buffer (starting from the last unread datum). If you wire the double orange wire from AI Read out of the loop (and set the tunnel for Auto Indexing) the VI will build another buffer in memory that is a series of AI Reads appended to each other, a sequential record of the acquisition. Here you put in a Write to Spreadsheet File VI. This is in the Functions Palette; it is the first VI in the File I/O palette (the icon looks like a 3 1/2" diskette).
    There is more to it than this. The Spreadsheet Write is 1D or 2D only. By running the array out of the While Loop with Auto Indexing enabled, you create a 3D array. (If you turn off Auto Indexing you will only get the last array produced by the AI Read.) You will need to create a new array with the pages of the 3D array placed serially into a 2D array. I don't think I want to get into that. It isn't that bad to do, but you should spend some time messing with arrays on your own; then you will see how to do it yourself. One solution is to only run the While Loop once with a buffer big enough to hold the whole acquisition, or in other words no loop at all. I don't know how many scans you want. The other thing is that by wiring that AI Read out of the loop you are creating an array that must be dynamically resized as the loop runs. Kind of a no-no. Better to know what you are acquiring size-wise and let the VI set up buffers and array space that do not need to be changed as the program runs.
    Caveat: I am fairly new to this and I could be wrong about the arrays and buffers and system resources stuff. However, I have done essentially what you are trying to do successfully, but not "continuously." I limited the AI Read to one large read that gets what I need.
    "Continuously" is usually reserved for screen writes as in the VI above. The computer is not required to continually allocate space for an ever-expanding array to be written to a file later. Each time this VI runs the While Loop the data goes to the waveform chart. Each new AI Read overwrites the previous one, sort of like if you turn the Auto Indexing off. When the VI stops, the waveform chart on screen would show the same data as was written to the spreadsheet.
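The "pages placed serially into a 2D array" step can't be shown as LabVIEW wiring in text, but the same reshaping reads like this in Java (the class name and the [pages][rows][cols] array shape are assumptions for illustration):

```java
public class PageStacker {
    // Append the pages of a 3D array one after another into a single 2D array,
    // i.e. a sequential record of every AI Read in acquisition order.
    public static double[][] stack(double[][][] pages) {
        int totalRows = 0;
        for (double[][] page : pages) totalRows += page.length;
        double[][] out = new double[totalRows][];
        int r = 0;
        for (double[][] page : pages) {
            for (double[] row : page) out[r++] = row;
        }
        return out;
    }
}
```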
    Feel free to email me directly: [email protected]. I will help as I can.
    Mike Ross

  • JAVA Array sizes - how to expand + See nice example code

    Hi, we are returning tables of VARCHAR, NUMBER, BOOLEAN and the like to Java using PL2JAVA and Oracle Sessions. The problem I am having is that the sizes of the arrays are undetermined, but it appears these must be set before calling the package - if the package returns more rows in its tables than the calling Java arrays have, then it fails. If we assign too many, then I guess we are using memory which will be wasted - important when this system could have 100 simultaneous users/connections.
    Has anyone got any advice? People may find the sample code below useful at worst.
    PACKAGE INTERFACE:
    FUNCTION ssfk_get_metadata.ssfp_get_2metadata RETURNS VARCHAR2
    Argument Name Type In/Out
    Default?
    P_USER_PERSON_ID NUMBER(10) IN
    P_SELF_SERVE_APPLICATION VARCHAR2 IN
    PT_DATA_SOURCE TABLE OF VARCHAR2(60) OUT
    PT_PROMPT TABLE OF VARCHAR2(30) OUT
    PT_DATA_TYPE TABLE OF VARCHAR2(30) OUT
    PT_DATA_LENGTH TABLE OF NUMBER OUT
    PT_DECIMAL_PLACES TABLE OF NUMBER OUT
    PT_MANDATORY_IND TABLE OF VARCHAR2(1) OUT
    PT_UCASE_IND TABLE OF VARCHAR2(1) OUT
    PT_DISPLAY_ONLY_IND TABLE OF VARCHAR2(1) OUT
    PT_WEB_LINK_CD TABLE OF VARCHAR2(10) OUT
    P_TABLE_INDEX BINARY_INTEGER OUT
    P_MESSAGE_NUM NUMBER(5) OUT
    Code example:
    public static String getApplicationMetaData(String strPersonID, String strApplication, Session sesSession) {
        String strClientString = "";
        if (sesSession == null) {
            return "CONNECTION ERROR";
        } else {
            Double dblUser = new Double(strPersonID);
            // initialising of IN parameters
            PDouble pdbUserPersonId = new PDouble(dblUser.intValue());
            PStringBuffer pstSelfServeApplication = new PStringBuffer(strApplication);
            // initialising of OUT parameters
            PStringBuffer pstDataSource[] = new PStringBuffer[intArraySize];
            PStringBuffer pstPrompt[] = new PStringBuffer[intArraySize];
            PStringBuffer pstDataType[] = new PStringBuffer[intArraySize];
            PDouble pdbDataLength[] = new PDouble[intArraySize];
            PDouble pdbDecimalPlaces[] = new PDouble[intArraySize];
            PStringBuffer pstMandatoryIND[] = new PStringBuffer[intArraySize];
            PStringBuffer pstUCaseIND[] = new PStringBuffer[intArraySize];
            PStringBuffer pstDisplayOnlyIND[] = new PStringBuffer[intArraySize];
            PStringBuffer pstWebLinkCode[] = new PStringBuffer[intArraySize];
            PInteger pinTableIndex = new PInteger(0);
            PDouble pdbMessageNum = new PDouble(0);
            // initialising of RETURN parameters
            PStringBuffer pstReturn = new PStringBuffer("N");
            // setting the array item sizes
            for (int i = 0; i < pstDataSource.length; i++) {
                pstDataSource[i] = new PStringBuffer(60);
                pstPrompt[i] = new PStringBuffer(30);
                pstDataType[i] = new PStringBuffer(30);
                pdbDataLength[i] = new PDouble(-1);
                pdbDecimalPlaces[i] = new PDouble(-1);
                pstMandatoryIND[i] = new PStringBuffer(1);
                pstUCaseIND[i] = new PStringBuffer(1);
                pstDisplayOnlyIND[i] = new PStringBuffer(1);
                pstWebLinkCode[i] = new PStringBuffer(10);
            }
            try {
                strClientString = strClientString.concat("001");
                ssfk_get_metadata ssfAppMetaData = new ssfk_get_metadata(sesSession);
                strClientString = strClientString.concat("002");
                pstReturn = ssfAppMetaData.ssfp_get_2metadata(pdbUserPersonId,
                    pstSelfServeApplication, pstDataSource, pstPrompt, pstDataType,
                    pdbDataLength, pdbDecimalPlaces, pstMandatoryIND, pstUCaseIND,
                    pstDisplayOnlyIND, pstWebLinkCode, pinTableIndex, pdbMessageNum);
                strClientString = strClientString.concat("003");
                if (pstReturn.stringValue().equalsIgnoreCase("Y")) {
                    strClientString = strClientString.concat("WORKED");
                    return strClientString;
                } else {
                    return "ERROR";
                }
            } catch (Exception e) {
                return strClientString + "ERROR:" + e.getMessage();
            }
        }
    }
    Thanks for any assistance.

    Play with Java Vectors. They are automatically expanding arrays! Just add elements and get them later. One thing that's tricky is that Vectors only store and return elements as Objects, so you have to explicitly recast them.
    -dan
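Dan's suggestion, including the explicit recast he warns about, looks like this (pre-generics style, matching the era of the original post; the class name is illustrative):

```java
import java.util.Vector;

public class VectorDemo {
    // Elements come back from a raw Vector as Object, so the caller recasts explicitly.
    public static String firstAsString(Vector rows) {
        return (String) rows.get(0);
    }

    public static void main(String[] args) {
        Vector rows = new Vector();   // grows automatically with each add
        rows.add("first row");
        rows.add("second row");
        System.out.println(firstAsString(rows) + " (size=" + rows.size() + ")");
        // prints: first row (size=2)
    }
}
```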
    Richard Leigh (guest) wrote:

  • Expanding an Array and the volume

    As of now it doesn't seem possible to expand an array and the associated volume without reformatting the entire array. Does anyone know of any workarounds?
    I added a new drive to an array. RAID Admin sees that the array is now 1.46 TB, but the Mac OS X Server 10.4 machine that the Xserve RAID is attached to still sees 1.09 TB.
    Are there any workarounds for this? Or any hope of an OS X upgrade that might resolve this issue? What's the use of an expandable array if you need to copy off all data, back it up, and then reformat in order to expand a volume?
    From apple.com/support
    http://docs.info.apple.com/article.html?artnum=301443
    "At this time, Mac OS X and Mac OS X Server do not recognize the capacity of an expanded Xserve RAID until after the expanded volumes have been reformatted."
    thanks for any help
    daniel

    I'm pretty sure you can expand the array and merge the slices without losing data.
    However, it would be absolutely foolish to do this without a full and proper backup. And in that case, I would strongly... STRONGLY... suggest that the best approach is just to dump the 4-disk RAID 5 and create a new 7-disk RAID 5. Faster and easier, and the OS won't have any problems recognizing or growing the volume... like it might with a volume you expanded.

  • Utilising an array to fix a problem

    I have created the below array and I wish to utilise the elements within it. Basically I have to create a program that inputs a date, say 12/06/65, and outputs 12 June 65.
    The array is below
    public class MonthList {
        private int nextFreeLocation;

        public MonthList() {
            Month[] monthList = {new Month("January  ", 31),
                                 new Month("February ", 28),
                                 new Month("March ", 31),
                                 new Month("April ", 30),
                                 new Month("May ", 31),
                                 new Month("June ", 30),
                                 new Month("July ", 31),
                                 new Month("August ", 31),
                                 new Month("September ", 30),
                                 new Month("October ", 31),
                                 new Month("November ", 30),
                                 new Month("December ", 31)};
        }
    }
    My array is based on a class I created, Month, which is passed a String for the month name and an int to set the days as constant for that month. I need a method or a loop so that when someone enters data from the keyboard and, say, presses 6, it will automatically recognise it as June. Any ideas? I have already created an inputDate class and will code my own Date exceptions when I have found a way of initialising the user input with the month and the days set for that month.
    Please help

    A Vector is a great expanding array utility: you can always add and remove from it, whereas with an Object array you need to set it up with a certain size and that is all you get. You can use the Vector to add Objects to it (days of the week) and specify an index associated with each Object; then when you need the Object back you can call the Vector's get method with the index alone.
    The benefit for your application would be a lot of saved coding. You see, if Monday is 1, Tuesday is 2 and so on, and you put them in the array with that index, then you know when a user enters 3 that they have chosen Wednesday, and you can call Vector.get(3) and automatically get Wednesday's Object. Did I explain that well?
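Since the month numbers are fixed and dense, a plain array lookup does the same job as the Vector with even less code. A minimal sketch (the class name is hypothetical; only the standard month names are assumed, not the poster's Month class):

```java
public class MonthLookup {
    private static final String[] NAMES = {"January", "February", "March", "April",
        "May", "June", "July", "August", "September", "October", "November", "December"};

    // Translate a month number typed by the user (1-12) into its name.
    public static String monthName(int monthNumber) {
        return NAMES[monthNumber - 1];   // an index lookup replaces any if/else chain
    }

    public static void main(String[] args) {
        // e.g. the "06" in the input date 12/06/65
        System.out.println(monthName(6));   // prints: June
    }
}
```

The same one-line lookup works against the MonthList array from the question: index the array with (monthNumber - 1) and read the Month object's name.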

  • Expanding and controller questions

    My company finally filled (physically, with drives) our XRAID. There are now 14 750 GB drives in it.
    At this current point it sits like this:
    XXXXXXX YYY ZZZZ
    X = blank drives on upper controller
    Y = raid 5 with spare on lower controller
    Z = raid 5 with spare on lower controller
    My ideal situation would be:
    YYY YYY Spare = RAID 5+1 and spare on upper
    ZZZ ZZZ Spare = RAID 5+1 and spare on lower
    I know how to expand arrays using RAID Admin, but I'm not sure how to MOVE an array from one controller to the other. I would need to move one array to the upper controller and then expand it, and then expand the array on the lower.
    Is it: create the array on the controller then copy the data over?
    Or: create the array and physically swap the drives?
    Or maybe its neither and I have this all wrong. Any advice would be appreciated, thanks.
    Thanks for the help
    Matt

    Thanks everyone for your help.
    So I guess the plan is to build an array on the upper, copy the data to it, then break and rebuild the other. Not as easy as I hoped, but doable.

  • Convert SQL script from one dialect to another.

    Hi all,
    I am trying to convert an SQL script in MySQL dialect to one for
    Firebird (Interbase Open Source fork).
    I will show you the original MySQL script, (one table of 70), what I want it to
    become and then the Java program that I have written which has gone some of
    the way - but I'm not very experienced in Java and I think my approach
    needs to be fundamentally overhauled, it's just that I don't know
    exactly how to go about it - maybe treat the table as a unit and pass my
    file table by table to a/some processing function(s)?
    Here is the original script
    CREATE TABLE `analysis` (
      `analysis_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
      `created` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
      `logic_name` varchar(128) NOT NULL,
      `db` varchar(120) DEFAULT NULL,
      `db_version` varchar(40) DEFAULT NULL,
      `db_file` varchar(120) DEFAULT NULL,
      `type` enum('constitutive_exon','exon','flanking_exon') DEFAULT NULL,
      `program` varchar(80) DEFAULT NULL,
      `program_version` varchar(40) DEFAULT NULL,
      `parameters` text,
      `gff_feature` varchar(40) DEFAULT NULL,
       PRIMARY KEY (`analysis_id`),
       UNIQUE KEY `logic_name_idx` (`logic_name`)
    ) ENGINE=MyISAM AUTO_INCREMENT=261 DEFAULT CHARSET=latin1;

    I want this to become
    CREATE TABLE analysis
    (  -- ( on a separate line - have done this. Note also - got rid of funny quotes `
      analysis_id smallint(5),  -- see below for what happens to auto_increment
      created TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00', -- datetime now TIMESTAMP -- trivial
      logic_name varchar(128) NOT NULL,
      db varchar(120) DEFAULT NULL,
      db_version varchar(40) DEFAULT NULL,
      db_file varchar(120) DEFAULT NULL,
      type (CHECK TYPE IN ('constitutive_exon','exon','flanking_exon')) DEFAULT NULL,  -- enum becomes a CHECK constraint - parse the string in a function?
      program varchar(80) DEFAULT NULL,
      program_version varchar(40) DEFAULT NULL,
      parameters BLOB SUB_TYPE TEXT,  -- text becomes BLOB SUB_TYPE TEXT - trivial
      gff_feature varchar(40) DEFAULT NULL,
      -- PRIMARY KEY (analysis_id),  -- here's where the fun starts - see below.
      -- UNIQUE KEY logic_name_idx (logic_name) -- more fun here
    )  -- ) on new line
    -- ENGINE=MyISAM AUTO_INCREMENT=261 DEFAULT CHARSET=latin1; - this line gets obliterated - see below.
    -- only data of interest on this line is the AUTO_INCREMENT VALUE - 261
    ALTER TABLE ADD CONSTRAINT analysis_id_PK PRIMARY KEY(analysis_id) USING INDEX analysis_id_PK_IX
    ALTER SEQUENCE analysis_id_seq RESTART WITH 261 -- note auto_increment=261 at end
    -- note, take field name of AutoIncrement - add PK and PK_IX
    -- as required. I will also have to write something similar for the UNIQUE KEY - but
    -- if I can do it for PRIMARY KEY, then it should be easy...

    I have written a program which has started to do some of the easier stuff
    below - however, I now think that I should be treating a TABLE as a unit rather
    than the line by line processing which I have been doing so far. I would
    like some pointers as to how I could do this - the .sql file has about 70
    tables and I want to be able to process the file in one pass. Any hints,
    recommendations, URLs, tips or other help greatly appreciated - rgs,
    Paul...
    ============= Java listing so far (I have been trying 8-) ) ===========
    import java.io.*;
    import java.util.*;
    class FileProcess {
      public static void main(String args[]) {
        try {
          // Open the input script and the output file
          FileInputStream fistream = new FileInputStream("analysis.sql");
          FileWriter fwstream = new FileWriter("nanalysis.sql");
          // Get the object of DataInputStream
          DataInputStream in = new DataInputStream(fistream);
          BufferedReader br = new BufferedReader(new InputStreamReader(in));
          BufferedWriter out = new BufferedWriter(fwstream);
          String strLine;
          String newLine = System.getProperty("line.separator");
          String newSQLText = "";
          String tblName = "";
          String recordDelims = "[ ]";
          Boolean inTable = false;
          String tableBegin = "(" + newLine;
          System.out.println("\nAnd tableBegin is *_" + tableBegin + "_*");
          // Read file line by line
          while ((strLine = br.readLine()) != null) {
            // Print the content on the console
            System.out.println("StrLine = " + strLine);
            newSQLText = getRidOfWierdQuotes(strLine);
            if (strLine.contains("CREATE TABLE")) {
              StringTokenizer st = new StringTokenizer(newSQLText, "` ");
              st.nextToken();
              st.nextToken();
              tblName = st.nextToken();
              System.out.println("\nAnd the table name is *_" + tblName + "_*");
            }
            if (strLine.contains(" (") && strLine.contains("CREATE TABLE")) {
              System.out.println("\nAnd here's the start of a table!");
              newSQLText = newSQLText.replace(" (", newLine + "(");
            }
            if (strLine.contains(" text,") || strLine.contains(" text ")) {
              newSQLText = newSQLText.replace(" text", " BLOB SUB_TYPE TEXT");
            }
            System.out.println("\nNew strLine = " + newSQLText);
            out.write(newSQLText + "\n");
          }
          // Close the streams
          in.close();
          out.close();
        } catch (Exception e) {
          // Catch exception if any
          System.err.println("Error: " + e.getMessage());
        }
      }

      public static String getRidOfWierdQuotes(String iString) {
        return iString.replace("`", "");
      }
    } // End of class FileProcess

    >
    Your technique looks OK but I would avoid all the cosmetic changes. It doesn't make any
    difference to the script where the newline comes, and the back-quotes are legal SQL too.
    I would confine your activities to what is actually required. Otherwise you are just adding
    implementation difficulties and also running the risk of an ever-expanding scope of what
    the cosmetic improvements should include.
    Thanks for your reply - however, I would just make two points.
    1) The cosmetic changes are the easiest - and I've essentially implemented them already, and
    for myself, I find that when working on a system with many tables, the way they are presented
    by whatever tool one is using to modify/update/change/run various scripts is very important as
    an aid to organisation and ultimately understanding.
    2) Having essentially completed the cosmetic stuff, I now find myself turning to the group here
    for help with the real "meat" of the project - getting the PRIMARY KEYs and other INDEXES
    &c. into the mix - and this is tricky because of the significant differences between MySQL and
    Firebird SQL dialects - that and the fact that not all AUTO_INCREMENT fields are PRIMARY
    KEYs - so I am looking beyond the mere cosmetic.
    I think I said in my original post that I wanted to perform this task in "one pass". Perhaps what
    I should have said (and might have been clearer) is if I said that I just wanted to run one
    Java programme against the data.
    I now think that my Java programme will have to go through the data a couple of times - on
    the first pass - it will collect the names of those tables for which the AUTO_INCREMENT
    field *is* the PRIMARY KEY - putting each name into a Vector (deprecated?) or similar
    and then go through my "cleaned up" sql file adding the KEYs/INDEXes &c. as I go.
    What do you think of this approach - or should I be looking at constructing "Table"
    objects (as arrays of String?) and manipulating those?
    This is my real question - what is the best approach to take - as a 36K guru level
    poster - you probably have an idea - I'm not asking for the work to be done for me,
    as you can see - I've made an effort myself, despite my Java not being to the highest
    pinnacle of coding perfection ;) - any snippets, help, anything appreciated (from
    anybody...) - TIA and rgs,
    Paul...
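
    For what it's worth, the first pass Paul describes can be sketched in a few lines of Java. This is only an illustrative sketch, not something tested against a real dump: the class and method names (TwoPassSketch, findAutoIncPkTables) are invented, and it assumes the usual mysqldump layout where the AUTO_INCREMENT column and the PRIMARY KEY clause appear on separate lines inside each CREATE TABLE block:

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TwoPassSketch {
        // First pass: collect the names of tables whose AUTO_INCREMENT
        // column is also the declared PRIMARY KEY.
        static List<String> findAutoIncPkTables(List<String> lines) {
            List<String> result = new ArrayList<>();
            String currentTable = null;   // table whose block we are inside
            String autoIncColumn = null;  // its AUTO_INCREMENT column, if any
            Pattern create = Pattern.compile("CREATE TABLE `?(\\w+)`?");
            Pattern autoInc = Pattern.compile("`?(\\w+)`?.*AUTO_INCREMENT");
            Pattern pk = Pattern.compile("PRIMARY KEY \\(`?(\\w+)`?\\)");
            for (String line : lines) {
                Matcher m = create.matcher(line);
                if (m.find()) { currentTable = m.group(1); autoIncColumn = null; continue; }
                m = autoInc.matcher(line);
                if (m.find()) { autoIncColumn = m.group(1); continue; }
                m = pk.matcher(line);
                if (m.find() && currentTable != null
                        && m.group(1).equals(autoIncColumn)) {
                    result.add(currentTable);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<String> dump = List.of(
                "CREATE TABLE `users` (",
                "  `id` int NOT NULL AUTO_INCREMENT,",
                "  PRIMARY KEY (`id`)",
                ");",
                "CREATE TABLE `log` (",
                "  `seq` int NOT NULL AUTO_INCREMENT,",
                "  PRIMARY KEY (`entry_date`)",  // PK is not the AUTO_INCREMENT field
                ");");
            System.out.println(findAutoIncPkTables(dump)); // prints [users]
        }
    }
    ```

    The second pass over the cleaned-up file can then consult this list while rewriting each table's key/index clauses, which avoids having to build full "Table" objects in memory at all.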

  • Unexpected Labview Shutdown

    Hi Everyone,
    New Labview user here.  I am using an Agilent multimeter to measure the current of a device over time.  I am communicating with the multimeter through the network.  For the most part, my code runs fine until last night.
    I set it up so that my program would measure and record the data every 60s with 7200 measurements (120 hours).  When I came to check it this morning, Labview was closed, and I initially thought that maybe I closed Labview by accident.  I checked where my data is saved, and the file was saved in the middle of the night (about 750 points of data or 12.5 hours).  Therefore, I didn't close Labview by accident.  The multimeter was still running but not being controlled by Labview.
    My code is also set up to stop running if the device breaks.  The device is not broken.  I don't have any code that records error logs, unfortunately.
    I don't think anyone will be able to give me a definite answer, but are there any ideas?  This is the first time it has happened.  I attached my code.
    Thank you!

    Expanding on what Sam mentioned, here are some suggestions for not having "ever-growing" arrays ...
    Write your Spreadsheet "smarter".  Outside the loop, write the Header, with Append to File set False.  Don't bring that wire into the While Loop (in fact, get rid of the Shift Register altogether).  Instead, simply take the current set of data (which you don't need to Add to Arrays), build a 1D array of the current "row" data, and write that to Spreadsheet, but with Append to File now wired True (which appends the current row to the now-growing Spreadsheet file).
    Eliminate the bottom Shift Register that you are using to accumulate data for an ever-growing Graph.  Instead, use a Chart, which allows you to add "the most recent point", scrolling the plot rightward as new points come in.  Make the Chart "as big as reasonable".
    I don't know your intended use of the middle Shift Register accumulating TimeStamps in ever-growing arrays.  If you are looking for Elapsed Times, use a one-or-several node Shift Register to remember the last N time samples (in case you need more than simply "time between samples"), and simplify this section, as well.  Why keep all of the Times if you only need one to write to the Spreadsheet (once it has been written, why save it)?
    Removing ever-growing arrays from your loop will not only prevent "out of memory" problems, but will also prevent "time leaks" caused by having to stop and allocate more memory for the ever-growing arrays (this might not be a problem here, but why take the chance?).
    Bob Schor
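
    Although Bob's advice is about LabVIEW wiring, the underlying pattern (write the header once, append each new row to the file, and keep only a small rolling window of timestamps) is language-independent. Here is a rough sketch of that pattern in Java; the class name, the tab-separated format, and the window size of 4 are arbitrary choices for illustration:

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StreamingLogger {
        private final Path file;
        // Small rolling window of recent timestamps (the "last N samples")
        private final Deque<Long> recentTimes = new ArrayDeque<>();
        private static final int WINDOW = 4;

        public StreamingLogger(Path file, String header) throws IOException {
            this.file = file;
            // Write the header exactly once, truncating any old contents
            Files.writeString(file, header + System.lineSeparator());
        }

        // Append one row; memory use stays constant no matter how long we run.
        public void record(long timestampMillis, double value) throws IOException {
            recentTimes.addLast(timestampMillis);
            if (recentTimes.size() > WINDOW) recentTimes.removeFirst(); // drop old timestamps
            Files.writeString(file,
                    timestampMillis + "\t" + value + System.lineSeparator(),
                    StandardOpenOption.APPEND);
        }

        // Elapsed time across the kept window only, not the whole run
        public long elapsedSinceOldestKept() {
            return recentTimes.getLast() - recentTimes.getFirst();
        }

        public static void main(String[] args) throws IOException {
            Path tmp = Files.createTempFile("samples", ".tsv");
            StreamingLogger log = new StreamingLogger(tmp, "time\tcurrent_mA");
            for (int i = 0; i < 10; i++) {
                log.record(1000L * i, 0.5 + i); // one sample per "second"
            }
            System.out.println(Files.readAllLines(tmp).size()); // prints 11 (header + 10 rows)
            System.out.println(log.elapsedSinceOldestKept());   // prints 3000 (last 4 samples)
        }
    }
    ```

    Nothing in the loop grows with the number of samples, which is exactly what the Chart-plus-append approach buys you in LabVIEW.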

  • How do I count specific, smaller groups of information in one large table?

    Hello all,
    I have a feeling the answer to this is right under my nose, but somehow, it is evading me.
    I would like to be able to count how many photos are in any specific gallery. Why? Well, on my TOC page, I thought it would be cool to show  the user how many photos were in any given gallery displayed on the screen as part of all the gallery data I'm presenting. It's not necessary, but I believe it adds a nice touch. My  thought was to have one massive table containing all the photo information and another massive table containing the gallery  information, and currently I do. I can pull various gallery information  based on user selections, but accurately counting the correct number of  images per gallery is evading me.
    In my DB, I have the table, 'galleries', which has several columns, but the two most relevant are g_id and g_spec. g_id is the primary key and is an AI column that also serves as the gallery 'serial' number. g_spec is a value that will have one of 11 different values in it (not relevant for this topic.)
    Additionally, there is the table, 'photos', and in this table are three columns:  p_id, g_id and p_fname. p_id is the primary key, g_id is the foreign key (primary key of the 'galleries' table) and p_fname contains the filename of each photo in my ever-expanding gallery.
    Here's the abbreviated contents of the galleries table showing only the first 2 columns:
    (`g_id`, `g_spec`, etc...)
    (1, 11, etc...),
    (2, 11, etc...),
    (3, 11, etc...),
    (4, 11, etc...),
    (5, 12, etc...),
    (6, 13, etc...)
    Here's the contents of my photos table so far, populated with test images:
    (`p_id`, `g_id`, `p_fname`)
    (1, 1, '1_DSC1155.jpg'),
    (2, 1, '1_DSC1199.jpg'),
    (3, 1, '1_DSC1243.jpg'),
    (4, 1, '1_DSC1332.jpg'),
    (5, 1, '1_DSC1381.jpg'),
    (6, 1, '1_DSC1421.jpg'),
    (7, 1, '1_DSC2097.jpg'),
    (8, 1, '1_DSC2158a.jpg'),
    (9, 1, '1_DSC2204a.jpg'),
    (10, 1, '1_DSC2416.jpg'),
    (11, 1, '1_DSC2639.jpg'),
    (12, 1, '1_DSC3768.jpg'),
    (13, 1, '1_DSC3809.jpg'),
    (14, 1, '1_DSC4226.jpg'),
    (15, 1, '1_DSC4257.jpg'),
    (16, 1, '1_DSC4525.jpg'),
    (17, 1, '1_DSC4549.jpg'),
    (18, 2, '2_DSC1155.jpg'),
    (19, 2, '2_DSC1199.jpg'),
    (20, 2, '2_DSC1243.jpg'),
    (21, 2, '2_DSC1332.jpg'),
    (22, 2, '2_DSC1381.jpg'),
    (23, 2, '2_DSC1421.jpg'),
    (24, 2, '2_DSC2097.jpg'),
    (25, 2, '2_DSC2158a.jpg'),
    (26, 2, '2_DSC2204a.jpg'),
    (27, 2, '2_DSC2416.jpg'),
    (28, 2, '2_DSC2639.jpg'),
    (29, 2, '2_DSC3768.jpg'),
    (30, 2, '2_DSC3809.jpg'),
    (31, 2, '2_DSC4226.jpg'),
    (32, 2, '2_DSC4257.jpg'),
    (33, 2, '2_DSC4525.jpg'),
    (34, 2, '2_DSC4549.jpg'),
    (35, 3, '3_DSC1155.jpg'),
    (36, 3, '3_DSC1199.jpg'),
    (37, 3, '3_DSC1243.jpg'),
    (38, 3, '3_DSC1332.jpg'),
    (39, 3, '3_DSC1381.jpg'),
    (40, 3, '3_DSC1421.jpg'),
    (41, 3, '3_DSC2097.jpg'),
    (42, 3, '3_DSC2158a.jpg'),
    (43, 3, '3_DSC2204a.jpg'),
    (44, 3, '3_DSC2416.jpg'),
    (45, 3, '3_DSC2639.jpg'),
    (46, 3, '3_DSC3768.jpg'),
    (47, 3, '3_DSC3809.jpg'),
    (48, 3, '3_DSC4226.jpg'),
    (49, 3, '3_DSC4257.jpg'),
    (50, 3, '3_DSC4525.jpg'),
    (51, 3, '3_DSC4549.jpg');
    For now, each gallery has 17 images which was just some random number I chose.
    I need to be able to write a query that says, tell me how many photos are in a specific photoset (in the photos table) based on the number in galleries.g_id  and photos.g_id being equal.
    As you see in the photos table, the p_id column is an AI column (call it photo serial numbers), and the g_id column assigns each specific photo to a specific gallery number that is equal to some gallery ID in the galleries.g_id table. SPECIFICALLY, for example I would want to have the query count the number of rows in the photos table whose g_id = 2 when referenced to g_id = 2 in the galleries table.
    I have been messing with different DISTINCT and COUNT methods, but all seem to be limited to working with just one table, and here, I need to reference two tables to achieve my result.
    Would this be better if each gallery had its own table?
    It should be so bloody simple, but it's just not clear.
    Please let me know if I have left out any key information, and thank you all in advance for your kind and generous help.
    Sincerely,
    wordman

    bregent,
    I got it!
    Here's the deal: the query that picks the subset of records:
    $conn = dbConnect('query');
    $sql = "SELECT *
            FROM galleries
            WHERE g_spec = '$spec%'
            ORDER BY g_id DESC
            LIMIT $startRow,".SHOWMAX;
    $result = $conn->query($sql) or die($conn->error);
    $galSpec = $result->fetch_assoc();
    picks 3 at a time, and with each record is an individual gallery number (g_id). So, I went down into my code where a do...while loop runs through the data, displaying the info for each subset of records and I added another query:
    $conn = dbConnect('query');
    $getTotal = "SELECT COUNT(*)
                FROM photos
                WHERE g_id = {$galSpec['g_id']}
                GROUP BY g_id";
    $total = $conn->query($getTotal);
    $row = $total->fetch_row();
    $totalPix = $row[0];
    which uses the value in $galSpec['g_id']. I didn't know the proper syntax for including it, but when I tried the curly braces, it worked. I altered the number of photos in each gallery in the photos table so that each total is different, and the results display perfectly.
    And as you can see, I used some of the code you suggested in the second query and all is well.
    Again, thank you so much for being patient and lending me your advice and assistance!
    Sincerely,
    wordman
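
    As an aside, the per-gallery COUNT can also be fetched for all galleries in one statement, which would avoid running a separate COUNT query inside the display loop. A sketch against the schema above (untested; the LEFT JOIN keeps galleries that have no photos yet, reporting 0 for them):

    ```sql
    SELECT g.g_id,
           COUNT(p.p_id) AS photo_count
    FROM galleries AS g
    LEFT JOIN photos AS p ON p.g_id = g.g_id
    GROUP BY g.g_id
    ORDER BY g.g_id DESC;
    ```

    Fetching these rows into an associative array keyed on g_id would then replace the inner query entirely.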

  • WSUS For Clients With No Internet Access

    This is more of a functional question than an issue.
    Right now I have WSUS set to 'Store update files locally' and it works great.  With an ever expanding number and size of updates, I don't have space to keep storing the necessary updates on my WSUS server.
    If I set WSUS to 'Do not store update files locally', will my clients without internet access still be able to get updates?  Many of my devices are behind firewalls that do not permit access to the internet in any form.  I'm trying to avoid adding
    storage if at all possible.
    Thanks,
    Brian

    Correct, if you set WSUS to 'do not store update files locally', then your clients without internet access will not be able to access Microsoft Update to download the files without you creating a firewall exception. Which sounds like an awkward way to do
    it.
    (1) Are you on top of your regular maintenance with WSUS, ie, declining superseded updates, running Server Cleanup Wizard in the recommended order?
    (2) Are you confident that the classification of updates being downloaded is appropriate and nothing un-needed (e.g drivers/absent OS) are being downloaded?  Have you chosen to download the space-hogging express installation files?
    (1) and (2) would generally be better practice than 'do not store updates locally', but if bandwidth is cheap or irrelevant for you, then perhaps you might be tempted to not store updates locally. In your situation where you have a reason to deny clients
    internet access, it would seem like a lot of paperwork, and technical expertise, to only allow them internet access for updates. (Plus, I'm not sure it's possible; I just presume it would be.)
    What are your numbers?  (size of WSUSContent, WSUSDatabase, space on drives?)
