Startup process gets longer and longer
Hi, my parents have an iMac (purchased in '09) running OS X 10.9.5. Over the years, and especially lately, the time it takes to start up has gotten longer and longer.
Any advice or suggestions would be much appreciated.
Babowa,
Below are the results of running EtreCheck, and I apologize for taking so long getting the results to you! Thanks very much!
Problem description:
Startup process takes longer and longer (especially lately)!
EtreCheck version: 2.1.5 (108)
Report generated December 19, 2014 at 12:29:58 PM PST
Click the [Support] links for help with non-Apple products.
Click the [Details] links for more information about that line.
Click the [Adware] links for help removing adware.
Hardware Information: ℹ️
iMac (20-inch, Early 2009) (Verified)
iMac - model: iMac9,1
1 2.66 GHz Intel Core 2 Duo CPU: 2-core
4 GB RAM Upgradeable
BANK 0/DIMM0
2 GB DDR3 1067 MHz ok
BANK 1/DIMM0
2 GB DDR3 1067 MHz ok
Bluetooth: Old - Handoff/Airdrop2 not supported
Wireless: en1: 802.11 a/b/g/n
Video Information: ℹ️
NVIDIA GeForce 9400 - VRAM: 256 MB
iMac 1680 x 1050
System Software: ℹ️
OS X 10.9.5 (13F34) - Uptime: 0:4:4
Disk Information: ℹ️
Hitachi HDT721032SLA380 disk0 : (320.07 GB)
EFI (disk0s1) <not mounted> : 210 MB
Recovery HD (disk0s3) <not mounted> [Recovery]: 650 MB
Macintosh HD (disk1) / : 318.88 GB (275.79 GB free)
Encrypted AES-XTS Unlocked
Core Storage: disk0s2 319.21 GB Online
HL-DT-ST DVDRW GA11N
USB Information: ℹ️
Apple Inc. Built-in iSight
Apple Computer, Inc. IR Receiver
Apple Inc. BRCM2046 Hub
Apple Inc. Bluetooth USB Host Controller
Gatekeeper: ℹ️
Mac App Store and identified developers
Adware: ℹ️
Conduit [Remove]
Startup Items: ℹ️
HP IO: Path: /Library/StartupItems/HP IO
Startup items are obsolete in OS X Yosemite
Launch Daemons: ℹ️
[loaded] com.adobe.fpsaud.plist [Support]
User Launch Agents: ℹ️
[running] com.zeobit.MacKeeper.Helper.plist [Support]
User Login Items: ℹ️
iTunesHelper UNKNOWNHidden (missing value)
Skype UNKNOWN (missing value)
Documents FolderHidden (/Users/[redacted]/Documents)
HP Scheduler Application (/Library/Application Support/Hewlett-Packard/Software Update/HP Scheduler.app)
Internet Plug-ins: ℹ️
FlashPlayer-10.6: Version: 15.0.0.246 - SDK 10.6 [Support]
QuickTime Plugin: Version: 7.7.3
Flash Player: Version: 15.0.0.246 - SDK 10.6 Mismatch! Adobe recommends 16.0.0.235
Default Browser: Version: 537 - SDK 10.9
OfficeLiveBrowserPlugin: Version: 12.3.6 [Support]
Silverlight: Version: 4.0.60129.0 [Support]
iPhotoPhotocast: Version: 7.0
3rd Party Preference Panes: ℹ️
Flash Player [Support]
Time Machine: ℹ️
Skip System Files: NO
Auto backup: NO - Auto backup turned off
Destinations:
Dic's Thumb Drive [Local]
Total size: 0 B
Total number of backups: 0
Oldest backup: -
Last backup: -
Size of backup disk: Excellent
Backup size 0 B > (Disk size 0 B X 3)
Top Processes by CPU: ℹ️
54% Mail
7% WindowServer
4% HP Device Monitor
4% NotificationCenter
3% opendirectoryd
Top Processes by Memory: ℹ️
198 MB Mail
112 MB mds_stores
94 MB Microsoft Word
82 MB WindowServer
77 MB Safari
Virtual Memory Information: ℹ️
1.45 GB Free RAM
1.63 GB Active RAM
460 MB Inactive RAM
483 MB Wired RAM
422 MB Page-ins
0 B Page-outs
Diagnostics Information: ℹ️
Dec 19, 2014, 12:26:32 PM Self test - passed
Similar Messages
-
Why do contact names keep getting longer and longer?
On some of my contacts, the name keeps getting longer and longer.
For example, if my name is Jason Kahng.
After a few weeks, when I lookup my name in Contacts it comes up as "Jason Kahng Jason Kahng Jason Kahng"
This is very annoying. If I edit the name back down to Jason Kahng, it will do this again a few weeks later.
Please let me know how to fix this problem.
Thank you!
With iOS 5.0.1, your iPhone uses Wi-Fi to sync your contacts, not iTunes.
Tap Settings > iCloud
Switch Contacts off, then back on, then reset the iPhone:
Hold the On/Off Sleep/Wake button and the Home button down at the same time for at least ten seconds, until the Apple logo appears. -
The pinwheel is spinning for longer and longer periods of time. Why and what do I need to do about that?
You have 10.6 on that machine; I suggest you stick with it for performance and third-party hardware and software reasons as long as possible.
Consider 10.8 (not 10.7) when it's released, because 10.7 and 10.8 will require a new investment in software and newer third-party hardware, as they require newer drivers that old machines won't have (a forced upgrade because of software, really nice of them).
http://roaringapps.com/apps:table
As far as your Safari problem goes, do these things until it's resolved:
1: Software Update fully under the Apple menu.
2: Check the status of your plug-ins and update them (this works for all browsers). Also install Firefox and see if your problems continue; you should always have at least two browsers on the machine just in case one fails.
https://www.mozilla.org/en-US/plugincheck/
Flash install instructions/problem resolution here if you need it.
How to install Flash, fix problems
3: Install Safari again from Apple's web site
https://www.apple.com/safari/
4: Run through this list of fixes, stopping with #16 and report back before doing #17
Step by Step to fix your Mac -
3.x Workbooks taking longer and longer to run each week
Hey all, I have a user who has embedded 5 versions of the same query into a workbook. He runs this workbook every Monday. When he first created the workbook, it took 30 minutes to run. Each week that goes by, the workbook takes longer and longer to run, eventually reaching a runtime of 2 hours. Periodically my user has to go and make a change to the workbook, and after he recreates it, it goes back to taking 30 minutes to run.
Is there some kind of a buffer that is filling up that I don't know about? Is there a way I can refresh the workbook so that the runtime doesn't creep like it is doing?
Thanks
Adam
-
Word File Takes Longer and Longer to Save
I’ve been working daily on the same 100-page document for several months. I had no problems with the document under Word 2003. However, under Word 2010 the file grows in size over time, and saving the file takes longer and longer to the point where there are significant timeouts (Word is “not responding”) whenever an automatic save occurs.
I don't want to disable automatic saves because Word does crash for me on rare occasions, generally if I do something "too fast".
I’ve found a workaround for the problem, and every three weeks or so after the automatic saves have become painfully long I copy the document to the clipboard (except for the last paragraph mark) and then I open a new document based on the relevant template and I paste the clipboard contents into the new document. I rename the old version of the document and the new version becomes the working version. This reduces the file size (currently around 1.4 MB) by about 150 KB and the problem goes away for another three weeks.
Certain aspects of my situation are unusual, and these may or may not be relevant to the problem:
At the end of each day I use (via a macro) the Review, Compare feature of Word to compare the document with the previous day’s version to allow me to reread any changes I made to it.
I use various other macros for intelligent page-turning, resizing windows, smart Find, etc.
I maintain the document as a DOC file (Word 97-2003 Compatibility Mode) because I need to share the document with an organization that requires this format.
The document flips back and forth a few times between being a one-column and two-column document.
The document has a table of contents on the last page.
The headings in the document have embedded section and subsection numbers.
The document has numerous embedded SEQ and cross-reference fields.
The document has embedded EMF pictures that were generated by a non-Microsoft application.
The long times to save the file and the temporary solution I’ve found to the problem suggest that some "junk" is accumulating “in” the last paragraph mark. This junk doesn't cause any operational errors, but it slows things down to the point where the auto-save times out and I temporarily get the distracting "not responding" message. It would be nice if Word could automatically eliminate the junk in the last paragraph mark so that I wouldn’t have to do it manually.
Do you have any suggestions for how I might eliminate the problem?
I'd be pleased to send a copy of the slow-saving file to a Microsoft Word programmer for diagnosis of the problem.
I have up-to-date Windows 7 professional (64 bit) and Word 2010 14.0.6129.5000 (32 bit).
Thanks for your help,
Don Macnaughton
I am experiencing exactly the same save issue, although I cannot use the suggestion of copying to a new document, as I have a lot of references within the same document and I'm scared that I'll lose them (or mess them up).
It is nearly a year later, did you have any luck?
Francois,
I'm still experiencing the problem. However, I've now converted the document from a DOC to a DOCX, but that made no difference. So every 18 or so days I copy all of the document into a new document except for the last paragraph mark, and the problem goes away for another 18 or so days. For my document this solution is fully reliable, although it's less convenient because it's a little complicated and I worry I may make a mistake or some text may be lost in the transition.
So I'm still looking for a solution to the problem. Is there anything unique about your document or your handling of the document that might be the cause of the problem? Are you using macros, Compare Versions, switching back and forth between one and two columns, or anything else that is common to the features that I list in my first post in this thread?
You might want to try my copying solution as a test while keeping your original document as the official version that you continue to work with. You could then check the test document very carefully to see if my solution works with your document. You might find that you can trust my solution (or you might not).
By the way, I make sure that the copy worked properly by doing a Compare Versions of the old and new documents. (Surprisingly, sometimes the compare finds very minor differences between the two documents, but usually not.)
If the problem really bothers you, you can hire Microsoft Support, although that will cost you some money. If you do that, please let us know the outcome.
Don Macnaughton -
Migration of LONG and LONG RAW datatype
Just upgraded a DB from 8.1.7.4 to 10.2.0.1.0. The post-upgrade tasks speak of migrating tables with LONG and LONG RAW datatypes to CLOBs or BLOBs. All of my tables in the DB with LONG or LONG RAW datatypes are in the sys, sysman, mdsys or system schemas (as per a query of dba_tab_columns). Are these to be converted? Or does Oracle want us to convert user data only (user_tab_columns)?
USER_TAB_COLUMNS tells you the columns in the tables owned by the current user. There may well be many users on your system that you created that contain objects. I suppose you could log in to each of those schemas and query their USER_TAB_COLUMNS table, but it's probably easier to query DBA_TAB_COLUMNS with an appropriate WHERE clause on the owner of the objects.
Justin -
How can I reset my son's Restrictions passcode? I have failed 9 attempts and it locks me out longer and longer. Thx
see here
http://support.apple.com/kb/HT1212
No choice -
Compressor times getting longer and longer...what is going on?
I burn two game DVDs per week of about the same length (1 hr to 1 hr 10 min). I have been using the Compressor preset of 90 min Best (MPEG-2) and then upping the bit rate to get 3.4 to 3.8 GB per DVD. The times to process are getting ridiculous. Normally it would take 2-3 hrs to process, but in the last week the last 3 Compressor processing times were 5 hrs, 7 hrs, and then today 22 hrs. As far as I know I haven't changed anything. What could be going on, and where should I check first? I'm using FCS 2. Thanks!
In case you haven't already, try some of the things outlined in the Troubleshooting Basics for Compressor.
For your scenario, I would try Clear QMaster Cache first. If that doesn't seem to solve anything, try Repair File Permissions, then Delete Preferences.
Or are those roads you've travelled already? -
SQLite Inserts Taking Longer and Longer
The application I'm working on makes repeated calls to a webservice to get data that is then cached in a local sqlite db for the user. Once the db hits ~5mb it starts taking painfully long to run each set of inserts. Calls to the webservice remain quick. There had been an issue with XML not being garbage collected, but I fixed that and now the profiler shows consistent memory usage.
I've tried running the inserts with indexes on and off. I've tried batching the inserts in transactions of 100, or the entire set. The db calls are synchronous.
Running the queries against the database directly (not through the AIR application) suggests that there isn't a slowdown at the 5 MB mark there, which is consistent with my experiences with SQLite. Restarting the application and continuing to download data into an existing project does not resolve the issue; it starts off slow.
So... does anyone have any ideas of other things to try to get insert performance up to a reasonable level? Has anyone else run into similar issues? Is anyone inserting into 20 MB+ dbs and not seeing degrading performance?
Thanks for the help!
Guess I posted prematurely. Looking closer, I realized there was a select happening during this process against a text column without an index. The slowdown was just the increasing cost of looping through the entire dataset looking at strings that often shared a fairly sizable starting substring. Chalk another problem up to the importance of appropriate indexes in your db!
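The moral of that self-diagnosis can be shown without SQLite at all: a lookup against an unindexed text column is a linear scan, and when the strings share a sizable starting prefix each comparison is slow too, while an index (modeled here with a plain HashMap; a sketch, not real SQLite) stays constant-time:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Models the difference between a full scan over a text column and an
// indexed lookup. The strings share a long common prefix, as in the post,
// so each comparison during the scan is also expensive.
public class ScanVsIndex {
    public static void main(String[] args) {
        List<String> table = new ArrayList<>();          // the "text column", no index
        Map<String, Integer> index = new HashMap<>();    // a hash index over the same values
        String prefix = "cache-entry-with-a-fairly-sizable-starting-substring-";
        for (int i = 0; i < 50_000; i++) {
            String key = prefix + i;
            table.add(key);
            index.put(key, i);
        }
        String needle = prefix + 49_999;                 // worst case: last row

        long t0 = System.nanoTime();
        int found = -1;
        for (int i = 0; i < table.size(); i++) {         // O(n): what each insert's SELECT was doing
            if (table.get(i).equals(needle)) { found = i; break; }
        }
        long scanNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        Integer viaIndex = index.get(needle);            // O(1): what an index gives you
        long indexNs = System.nanoTime() - t0;

        System.out.println("scan:  row " + found + " in " + scanNs + " ns");
        System.out.println("index: row " + viaIndex + " in " + indexNs + " ns");
    }
}
```

On a real SQLite table the equivalent fix is a CREATE INDEX on the text column being searched.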
-
Maintenance jobs taking longer and longer
On 2 of our GroupWise servers, the weekly maintenance jobs are taking much longer to run.
What used to finish before I got in at 7:30 am is now still running at 4 pm on one of the servers. Last Monday, they finished by noon.
We are running GroupWise 8.03 on Netware.
The two servers have about 300 Gig of data each.
The issue of maintenance jobs running later started a few months ago, but today is the worst.
Should I increase the GWWorker threads to something higher? I think I may want to push the startup times earlier too.
Other ideas?
thanks
Phil J
-
I got a new harddrive installed this year and it is difficult to move between pages or open or close anything instantly...waiting time varies...
Let's establish some specifics:
What model MBA do you have?
How much RAM
What HD? Original PATA drive, original SSD design or newer style SSD/RAM drive?
The file(s) that we're working with, what are they? How large are they and what program(s) are we using in conjunction with them?
We need to try and figure out if this is an application or an OS X issue. -
Can't fetch clob and long in one select/query
I created a nightmare table containing numerous binary data types to test an application I was working on, and believe I have found an undocumented bug in Oracle's JDBC drivers that is preventing me from loading a CLOB and a LONG in a single SQL select statement. I can load the CLOB successfully, but attempting to call ResultSet.get...() for the LONG column always results in
java.sql.SQLException: Stream has already been closed
even when processing the columns in the order of the SELECT statement.
I have demonstrated this behaviour with version 9.2.0.3 of Oracle's JDBC drivers, running against Oracle 9.2.0.2.0.
The following Java example contains SQL code to create and populate a table containing a collection of nasty binary columns, and then Java code that demonstrates the problem.
I would really appreciate any workarounds that allow me to pull this data out of a single query.
import java.sql.*;

/*
 * This class was developed to verify that you can't have a CLOB and a LONG column in the
 * same SQL select statement, and extract both values. Calling get...() for the LONG column
 * always causes 'java.sql.SQLException: Stream has already been closed'.
 *
 * CREATE TABLE BINARY_COLS_TEST (
 *     PK INTEGER PRIMARY KEY NOT NULL,
 *     CLOB_COL CLOB,
 *     BLOB_COL BLOB,
 *     RAW_COL RAW(100),
 *     LONG_COL LONG
 * );
 *
 * INSERT INTO BINARY_COLS_TEST (
 *     PK, CLOB_COL, BLOB_COL, RAW_COL, LONG_COL
 * ) VALUES (
 *     1,
 *     '-- clob value --',
 *     HEXTORAW('01020304050607'),
 *     HEXTORAW('01020304050607'),
 *     '-- long value --'
 * );
 */
public class JdbcLongTest {
    public static void main(String argv[]) throws Exception {
        Driver driver = (Driver) Class.forName("oracle.jdbc.driver.OracleDriver").newInstance();
        DriverManager.registerDriver(driver);
        Connection connection = DriverManager.getConnection(argv[0], argv[1], argv[2]);
        Statement stmt = connection.createStatement();
        ResultSet results = null;
        try {
            String query = "SELECT pk, clob_col, blob_col, raw_col, long_col FROM binary_cols_test";
            results = stmt.executeQuery(query);
            while (results.next()) {
                int pk = results.getInt(1);
                System.out.println("Loaded int");
                Clob clob = results.getClob(2);
                // It doesn't work if you just close the ascii stream.
                // clob.getAsciiStream().close();
                String clobString = clob.getSubString(1, (int) clob.length());
                System.out.println("Loaded CLOB");
                // Streaming not strictly necessary for short values.
                // Blob blob = results.getBlob(3);
                byte blobData[] = results.getBytes(3);
                System.out.println("Loaded BLOB");
                byte rawData[] = results.getBytes(4);
                System.out.println("Loaded RAW");
                byte longData[] = results.getBytes(5);
                System.out.println("Loaded LONG");
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        results.close();
        stmt.close();
        connection.close();
    }
} // public class JdbcLongTest
The problem is that LONGs are not buffered but are read from the wire in the order defined. The problem is the same as:
rs = stmt.executeQuery("select myLong, myNumber from tab");
while (rs.next()) {
    int n = rs.getInt(2);
    String s = rs.getString(1);
}
The above will fail for the same reason. When the statement is executed the LONG is not read immediately. It is buffered in the server waiting to be read. When getInt is called the driver reads the bytes of the LONG and throws them away so that it can get to the NUMBER and read it. Then when getString is called the LONG value is gone so you get an exception.
Similar problem here. When the query is executed the CLOB and BLOB locators are read from the wire, but the LONG is buffered in the server waiting to be read. When Clob.getString is called, it has to talk to the server to get the value of the CLOB, so it reads the LONG bytes from the wire and throws them away. That clears the connection so that it can ask the server for the CLOB bytes. When the code reads the LONG value, those bytes are gone so you get an exception.
This is a long standing restriction on using LONG and LONG RAW values and is a result of the network protocol. It is one of the reasons that Oracle deprecates LONGs and recommends using BLOBs and CLOBs instead.
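The restriction described above can be modeled with nothing more than a byte stream; the snippet below is an analogy (invented data, not Oracle's actual protocol) showing why skipping past a streamed value destroys it:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Models why a LONG can only be read in SELECT order: the value is streamed
// on the wire, and whatever the driver skips past to reach a later column is
// discarded, not buffered.
public class WireOrder {
    public static void main(String[] args) throws IOException {
        // Pretend the row arrives as "LONG value | NUMBER value" on the wire.
        byte[] wire = "-- long value --|12345".getBytes("US-ASCII");
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));

        // Asking for the later column first forces the driver to skip (and
        // throw away) the streamed LONG bytes sitting in front of it.
        in.skipBytes("-- long value --|".length());
        byte[] num = new byte[5];
        in.readFully(num);
        System.out.println("number column: " + new String(num, "US-ASCII"));

        // Now try to go back for the LONG: the bytes are gone, and the
        // stream cannot rewind. This is the "Stream has already been closed".
        System.out.println("bytes left for the LONG: " + in.available());
    }
}
```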
Douglas -
Mapping CLOB and Long in xml schema
Hi,
I am creating an xml schema to map some user defined database objects. For example, for a column which is defined as VARCHAR2 in the database, I have the following xsd type mapping.
<xsd:element name="Currency" type="xsd:string" />
If the oracle column is CLOB or Long(Oracle datatype), could you please tell me how I can map it in the xml schema? I do not want to use Oracle SQL type like:
xdb:SQLType="CLOB" since I need a generic type mapping to CLOB. Would xsd:string still hold good for CLOB as well as Long(Oracle datatype) ?
Please help.
Thanks,
Vadi.
Firefox getting slower and slower
Firefox has been working great; all of a sudden, pages are taking longer and longer to load, and I am unable to load search engines in Firefox or Explorer. Search engines work fine on a second computer.
I am having the same problem as GTA_doum. Firefox gets slower and slower if left open for a day or two; then, when I restart Firefox, it's fine for a day or two and then gets slower and slower. Help!
-
PreparedStatement executeQuery() getting slower and slower
Hi,
I have a servlet that do the following:
1.- Construct the where clause for a query with the data the user has sent.
2.- Get a connection from the pool.
3.- Load a temporary table with a select that uses the where clause created on the first step.
4.- Create a prepared statement as simple as "select * from temp_table where rownum < 500"
5.- While there is still data on the temporary table, loop
5.1.- Extract the information from the result set that the PreparedStatement executeQuery() returns.
5.2.- Delete the rows of the temporary table that had been read on the 5.1 step
5.3.- Check if there is more data
6.- Close the result set, statements, ...
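Steps 4-5 describe a chunked drain loop; a minimal sketch of that pattern in plain Java, with an in-memory list standing in for the temporary table (all names invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the chunked drain pattern from the post: repeatedly take up to
// 500 rows, process them, delete them, until the "table" is empty.
public class ChunkedDrain {
    static final int CHUNK = 500;

    public static void main(String[] args) {
        List<String> tempTable = new ArrayList<>();      // stands in for temp_table
        for (int i = 0; i < 1_234; i++) tempTable.add("row-" + i);

        int processed = 0;
        while (!tempTable.isEmpty()) {                   // step 5: loop while data remains
            int n = Math.min(CHUNK, tempTable.size());
            List<String> chunk = tempTable.subList(0, n); // step 4: "rownum < 500"
            for (String row : chunk) {                   // step 5.1: extract each row
                processed++;
            }
            chunk.clear();                               // step 5.2: delete the rows just read
        }                                                // step 5.3: isEmpty() is the check
        System.out.println("processed " + processed + " rows");
    }
}
```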
The first runs of the loop are very fast, on the order of 500 ms. When the servlet has executed some times the loop, each run of the loop starts getting slower and slower, growing the time of the loop with each run.
Do anyone knows why the loop takes longer and longer with each run?
Regards,
CptnAgua
// ... initialization of the servlet and reading the parameters
// The connection object is created and get a connection
// from the application servlet pool
ConexionBaseDatos bbdd;
bbdd = new ConexionBaseDatos();
try {
bbdd.conecta();
} catch (Exception e) {
... exception code
String sql;
String where = " where 1=1";
// the where clause is created depending on the parameters received by the servlet
String insert;
// here all the variables are created and most of them initialized
insert = "insert into temp_table select * from conciliation_v x " + where;
Statement stmt = null;
PreparedStatement prepStmt = null;
ResultSet rs = null;
SimpleDateFormat formatoFecha = null;
Date hoy = null;
FileWriter fstream = null;
BufferedWriter file = null;
String rowid_update = "";
String rowid_select = "";
formatoFecha = new SimpleDateFormat("yyyyMMddHHmmss");
hoy = new Date();
nombreFichero = "";
// the floder where the servlet is going to store the output is read
// from the server.xml
nombreFichero =
this.getInitParameter("DirectorioEscritura") + nombreFichero;
File fichero = new File(nombreFichero);
// if the file already exists, the servlet stops.
if (fichero.exists()) {
out.println("El fichero especificado ya existe.");
... exception code
LogManager.shutdown();
return;
sql = "select * from temp_table where rownum < 500";
fstream = new FileWriter(nombreFichero);
file = new BufferedWriter(fstream);
try {
// here the temp table is filled with the update string already defined
stmt = bbdd.getStmt();
stmt.executeUpdate(insert);
// and the query is preparsed
prepStmt = bbdd.getPrepStmt(sql);
prepStmt.setFetchSize(500);
} catch (SQLException e) {
... exception code
String linea;
String field1 = "";
String field2 = "";
String field3 = "";
String field4 = "";
String field5 = "";
String field6 = "";
String field7 = "";
String field8 = "";
String field9 = "";
String field10 = "";
String field11 = "";
String field12 = "";
String field13 = "";
boolean hayValores = true;
// the loop starts
while (hayValores) {
hayValores = false;
try {
// The preparsed statement is executed
rs = prepStmt.executeQuery();
} catch (SQLException e) {
... exception code
try {
while (rs.next()) {
hayValores = true;
try {
field1 =
lPad(rs.getString("field1"), 10); //50
field2 =
lPad(rs.getString("field2"), 18); //18
field3 =
lPad(rs.getString("field3"), 10); //0
field4 =
lPad(rs.getString("field4"), 50); //50
field5 =
lPad(rs.getString("field5"),
20); //10 + number
field6 =
lPad(rs.getString("field6"), 16); //number
field7 =
lPad(rs.getString("field7"),
10); // 10
field8 =
lPad(rs.getString("field8"), 10); //10
field9 = lPad(rs.getString("field9"), 1); //1
field10 = lPad(rs.getString("field10"), 10); // 0
field11 = lPad(rs.getString("field11"), 10); // 0
field12 = lPad(rs.getString("field12"), 10); // 0
field13 = lPad(rs.getString("field13"), 10); // 0
rowid_update =
rowid_update + "rowid = '" + rs.getString("a_rowid") +
"' OR ";
rowid_select =
rowid_select + "'" + rs.getString("a_rowid") +
} catch (Exception exp) {
... exception code
linea =
field1 + field2 + field3 + field4 +
field5 + field6 +
field7 + field8 +
field9 + field10 + field11 + field12 + field13 +
"\r\n";
file.write(linea);
} catch (Exception e) {
... exception code
file.flush();
if (hayValores) {
String delete;
delete =
"delete from temp_table where a_rowid in (" +
rowid_select.substring(0, rowid_select.length() - 2) +
rowid_select = "";
try {
stmt.executeUpdate(delete);
} catch (Exception exp) {
... exception code
file.close();
try {
rs.close();
} catch (Exception e) {
... exception code
try {
prepStmt.close();
} catch (Exception e3) {
... exception code
// Do the already extracted lines have to be flagged as extracted?
if (tipoExtraccion.toUpperCase().equals("AUTO") ||
marca.toUpperCase().equals("S")) {
if (!(rowid_update.equals(""))) {
rowid_update =
rowid_update.substring(0, rowid_update.length() - 3);
String update =
"UPDATE sys_conciliation.conciliation " + "SET FECHA_INTEGRADO = SYSDATE " +
"WHERE " + rowid_update;
try {
stmt.executeUpdate(update);
bbdd.commit();
} catch (SQLException e) {
... exception code
try {
bbdd.close();
} catch (Exception e) {
... exception code
try {
stmt.close();
} catch (Exception e) {
... exception code
out.println("OK");
out.close();
LogManager.shutdown();
public String rPad(String campo, int longitud) {
if (campo == null) {
campo = "";
int lcampo = campo.length();
if (lcampo > longitud)
return (campo.substring(0, longitud));
if (lcampo == longitud)
return (campo);
String nuevoCampo = campo;
for (int a = 0; a < longitud - lcampo; a++) {
nuevoCampo = nuevoCampo + " ";
return (nuevoCampo);
public String lPad(String campo, int longitud) {
if (campo == null) {
campo = "";
int lcampo = campo.length();
if (lcampo > longitud)
return (campo.substring(0, longitud));
if (lcampo == longitud)
return (campo);
String nuevoCampo = campo;
for (int a = 0; a < longitud - lcampo; a++) {
nuevoCampo = " " + nuevoCampo;
return (nuevoCampo);
}
Message was edited by:
CptnAgua
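One thing worth measuring in code like the above (an observation, not a confirmed diagnosis): rowid_update is appended to with String + on every row and is only reset after the whole loop, so each pass concatenates onto an ever longer string, and repeated + concatenation is quadratic. A StringBuilder keeps the same append linear; a small standalone comparison, with invented rowids:

```java
// Demonstrates why building a large clause with repeated String '+' gets
// slower each pass: every append copies the whole accumulated string.
// StringBuilder appends in amortized constant time instead.
public class GrowingClause {
    public static void main(String[] args) {
        int rows = 5_000;

        long t0 = System.nanoTime();
        String clause = "";
        for (int i = 0; i < rows; i++) {
            clause = clause + "rowid = 'AAA" + i + "' OR ";   // O(n) copy each time
        }
        long plusNs = System.nanoTime() - t0;

        t0 = System.nanoTime();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < rows; i++) {
            sb.append("rowid = 'AAA").append(i).append("' OR ");
        }
        String built = sb.toString();
        long sbNs = System.nanoTime() - t0;

        System.out.println("String +:      " + plusNs / 1_000_000 + " ms");
        System.out.println("StringBuilder: " + sbNs / 1_000_000 + " ms");
        System.out.println("same result: " + clause.equals(built));
    }
}
```

(An even cheaper fix for the original servlet would be binding the rowids as parameters rather than splicing them into the SQL text at all.)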