HPTouchsmart 300-1000
New frustrated user.
How do I QUICKLY reach a real English-speaking HP person for a major system issue?
New TouchSmart (3 months old) - the hard drive crashed.
The support person was overseas and hard to understand. No HP support in my area.
I have replaced the hard drive and am now looking to HP for reimbursement etc.
Working/navigating this support site is worse than waterboard torture.
Any help would be appreciated.
Thank you
This is a peer-to-peer site; it is not staffed or answered by HP, so you're stuck with mediocre phone support like everyone else. There's no magic solution.
Similar Messages
-
How to do a clean installation of windows 7 on hp touchsmart 300-1000
My hard drive recently crashed in my TouchSmart 300. I have purchased a new one, but when I insert the installation CD it freezes on the "Starting Windows" screen shortly after it installs the Windows files for the first time. I have read threads about this and saw the Safe Mode option and disabling the video driver, but I cannot get to that screen. After I hit any key to boot from the DVD, I tap F8 and it starts loading Windows files; when it is done it gives me the option to run in Safe Mode, then installs files and freezes again. Can somebody please help me? I have spent over a week straight trying to fix this. Thanks
The title says Windows 7, but the description of the startup process does not sound right. See this PDF for the model posted: http://h10032.www1.hp.com/ctg/Manual/c03472361.pdf
Go to page 56 under "Starting system recovery from user-created recovery discs
This section contains the procedure for performing a system recovery from the recovery discs you created"
If Windows 7 is installed, F8 should get the user into Safe Mode. Remove the recovery DVD from the DVD drive before doing this. What happens now?
{---------- Please click the "Thumbs Up" to say thanks for helping.
Please click "Accept As Solution" if my help has solved your problem. ----------}
This is a user supported forum. I am a volunteer and I do not work for HP. -
XA overhead in call to prepare, taking up to 1000 ms
Hello everyone.
In a particular use case in our load-test environment (similar to production), where customer data is updated via a SOAP call from a WebLogic 10.3 server (JDBC driver 11.2.0.2.0) against two 11gR2 RAC clusters (which entails a lot of SQL, including DML and a JMS message), we see execution times for oracle.jdbc.xa.client.OracleXAResource.prepare(Xid) (called once at the end of the service call) that are far from acceptable: about 300-1000 ms.
We measured the execution times with java profilers (dynaTrace, MissionControl). To ensure these values are valid we put the ojdbc6_g.jar in place and saw the long times in the logs.
Example:
<record>
<date>2011-07-27T16:48:45</date>
<millis>1311785325858</millis>
<sequence>7265</sequence>
<logger>oracle.jdbc.xa.client</logger>
<level>FINE</level>
<class>oracle.jdbc.xa.client.OracleXAResource</class>
<method>prepare</method>
<thread>11</thread>
<message>41B70007 Exit [354.443ms]</message>
</record>
We took a TCP dump to see what is sent to the database, but could not decode exactly what is transferred via the NET8 protocol.
From what I've read (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/xadistra.htm) the thin driver should be using the native XA by default so this should not be a reason for the poor performance.
We have many other services that do similar DML but don't show this behavior, so it must be something specific.
From the profiling and TCP dumping we are pretty sure the time is being spent on the DB side.
This assumption was strengthened by an odd fact: this Monday, after the system sat unused over the weekend, the overhead suddenly disappeared! Execution times were as low as one would expect (~5-10 ms). We saw that an out-of-memory error (ORA-4030) occurred on Saturday, which is still under investigation by the provider.
I suspected the long prepare times would come back under load, so I initiated a load test that executes these use cases and simulates a real-life scenario. After 1 or 2 hours, the problem was back. Now we are in the same situation as before; again it is reproducible with single calls and no other load on the DB. I imagined there might have been restarts of the instances or something similar to recover from the ORA-4030, so I initiated restarts of all instances, but without success.
This is where we are right now. The experience so far leads, IMHO, to the following conclusions/assumptions:
1. The time is being spent on the DB (maybe partly somewhere in the network).
2. We are most probably seeing erroneous behavior, because we had a situation where the issue did not occur, but we don't know why (yet).
3. Maybe it was accidental circumstance that the problem disappeared on Monday, and it had nothing to do with our later load test that it is back now (the physical hardware (DB server and storage) is shared, but we see no contention on CPU, RAM, or I/O).
4. JMS should not be the issue, because we see a dedicated prepare call which is fast and is handled locally on the AppServer.
The big question is: how can we pin down where exactly on the DB the time is being spent? Is there a way to find out how long each participating RM takes to handle the prepare call?
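One avenue for that question (a sketch only; v$active_session_history requires the Diagnostics Pack license, and a 10046 trace on the offending session is the license-free alternative - the program filter below is an assumption you would adjust to your connection setup):

```sql
-- Which wait events dominate for the JDBC sessions in the slow window?
-- Columns are standard ASH columns; the LIKE filter is illustrative.
select event, count(*) as samples
from   v$active_session_history
where  sample_time > sysdate - 1/24      -- last hour
and    program like '%JDBC%'
group  by event
order  by samples desc;
```

If the top events are log-file or commit-related, the prepare cost is redo/sync on the DB side; if they are cluster waits, the RAC interconnect is the place to look.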
Any help would be greatly appreciated, these execution times can threaten our SLA.
Kind regards,
Thomas
PS: We've opened an SR as well, but there has not been a lot of useful information so far. This statement is not very promising: "There is no specific mechanism to find out why the prepare state takes time."
Hi Thomas, you can run some tests before recommending enabling XA at the RAC level.
(Please check whether the JDBC driver needs access to the PL/SQL level of the XA procedures, or whether it just uses Oracle 11's native XA API.)
Check that you are using the JDBC driver for 11g.
- As a simple response-time test, do a shutdown abort on one node and check the response time on the other node.
- After that test, shut down and restart both databases to start from a clean scenario, and run some tests. If the system goes slow, check the locks at the RAC level: if you see the same SID locking the same object on both nodes, you need to run the XA scripts on your database; if not, keep looking. If you don't have the script to check locks at the RAC level, let me know and I can publish the scripts for you. On RAC 10g I ran the XA scripts all the time, because some clients needed the PL/SQL XA API, e.g. .COM or .NET on Windows 2003 or Windows 2008. -
Unable to display data no entry in the table without using Model clause
Hi,
I have an urgent requirement, described below.
The previously posted question was answered using the MODEL clause.
Is there any way to solve it without using the MODEL clause?
I have a table named "sale" consisting of three columns: empno, sale_amt, and sale_date.
(Please refer to the table script with data given below.)
Now if I execute the query :
"select trunc(sale_date) sale_date, sum(sale_amt) total_sale from sale group by trunc(sale_date) order by 1"
then it displays data only for the dates that have an entry in the table; it does not display anything for dates with no entry.
If you run the table script in your schema, you'll see there is no entry for 28 Nov 2009 in the sale table. The above query displays data for the rest of the dates present in the sale table, but nothing for 28 Nov 2009.
But I need that date present in the query output, with "sale_date" as 28 Nov 2009 and "total_sale" as 0.
Is there any means to get the result as I require?
Please help ASAP.
Thanks in advance.
Create table script with data:
CREATE TABLE SALE
(
EMPNO NUMBER,
SALE_AMT NUMBER,
SALE_DATE DATE
);
SET DEFINE OFF;
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(100, 1000, TO_DATE('12/01/2009 10:20:10', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(100, 1000, TO_DATE('11/30/2009 10:21:04', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(100, 1000, TO_DATE('11/29/2009 10:21:05', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(100, 1000, TO_DATE('11/26/2009 10:21:06', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(100, 1000, TO_DATE('11/25/2009 10:21:07', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(200, 5000, TO_DATE('11/27/2009 10:23:06', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(200, 4000, TO_DATE('11/29/2009 10:23:08', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(200, 3000, TO_DATE('11/24/2009 10:23:09', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(200, 2000, TO_DATE('11/30/2009 10:23:10', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(300, 7000, TO_DATE('11/24/2009 10:24:19', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(300, 5000, TO_DATE('11/25/2009 10:24:20', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(300, 3000, TO_DATE('11/27/2009 10:24:21', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(300, 2000, TO_DATE('11/29/2009 10:24:22', 'MM/DD/YYYY HH24:MI:SS'));
Insert into SALE
(EMPNO, SALE_AMT, SALE_DATE)
Values
(300, 1000, TO_DATE('11/30/2009 10:24:22', 'MM/DD/YYYY HH24:MI:SS'));
COMMIT;
Any help will be much appreciated.
Regards,

select sale_date, sum(sale_amt) total_sale
from
(
select empno, 0 sale_amt, (sale_date + ao.rn) sale_date
from
(
select empno, sale_amt, sale_date, (t.nxt_dt - t.sale_date) diff
from
(
select empno
,sale_amt, trunc(sale_date) sale_date
,trunc(nvl(lead(sale_date) over (partition by 1 order by sale_date), sale_date)) nxt_dt
from sale
) t
where (t.nxt_dt - t.sale_date) > 1
) rec, (select rownum rn from user_objects where rownum <= 200) ao
where ao.rn <= (rec.diff - 1)
union all
select empno, sale_amt, trunc(sale_date) sale_date
from sale
)
group by sale_date
order by 1;
~~~~Guess this will serve the purpose...
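For comparison, a more conventional sketch (untested against the posted data) generates the calendar with CONNECT BY and outer-joins the daily totals, so missing days come back as 0:

```sql
select d.dt as sale_date,
       nvl(s.total_sale, 0) as total_sale
from (
       -- one row per calendar day between the first and last sale
       select min_dt + level - 1 as dt
       from (select trunc(min(sale_date)) min_dt,
                    trunc(max(sale_date)) max_dt
             from sale)
       connect by level <= max_dt - min_dt + 1
     ) d
left join (
       select trunc(sale_date) sale_date, sum(sale_amt) total_sale
       from sale
       group by trunc(sale_date)
     ) s
  on d.dt = s.sale_date
order by d.dt;
```

Unlike the user_objects trick, the row generator here is bounded by the actual date range, so it does not depend on how many rows happen to exist in a dictionary view.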
Cheers Arpan -
As an asset management tool it should manage all images but only adjust raw
Is Aperture an image management tool, raw converter/workflow tool, or both?
Let it be both! Restrict what it is ALLOWED to ADJUST, but manage it all.
When I do a shoot, I start out with 300-1000 RAW images.
By the time I am finished with the client I will have narrowed the pile of image assets down to 20% of the original RAW files, JPGs of all of those, AND 10% of the selected originals will also end up as layered PSD/TIFF files. But when I go back in and want to open a PSD file, why not allow just that, then save it back as the same file or as a version?
OR, when I tell it to open a PSD file in CS2, it would ask "Make a version (full or flat), or edit original?" - and of course include a "don't ask me again for this format" check box!
So why not leave the layered PSD/TIFF files alone and NOT allow them to be adjusted, only managed?
Think about it, IF I want to adjust my multilayered PSD file I will most definitely only trust that to CS2 where I retain full creative control over the layers, and I can output a version if I want!
IF I am crazy enough to feel the need to use the shallow tools of Aperture to tweak the brightness/level/curves/saturation/WB/etc/etc of a flattened COPY of my layered PSD file then I will make a copy and drag it into Aperture. By shallow I mean that CS2 and layers has much more control than the single layer tools of Aperture.
As an option leave it open as a configuration choice in the prefs... "Adjustable image formats: RAW, JPG" and even on a per image basis, right-click--> image status-->Adjustable or Unadjustable.
Yes, I sent this idea off to Apple!
Moki,
Right there with you. Here's my rough scenario:
1. I shoot 500 RAWs.
2. Cull down to 100 keepers.
3. Client culls down to their 50 selects.
4. Anywhere from 25-50 become layered PSD files, depending on the presentation the client wants (sizes, packages, borders, special effects, etc.).
5. In addition to those 25, there may be *completely new* composites (PSD's) created from combinations of the 25 -- this isn't a version of one of the original masters -- it is a *completely new* image that Aperture needs to manage in the project (so import of PSD without flattening and/or allowing Photoshop to Save-As into the Aperture library is a must).
I'd even add to your request about allowing adjustments. It would be totally cool if the adjustment tools were context-sensitive, enabled when working with an image (like RAW or JPG) where Aperture can adjust, and disabled when working with an image (like layered PSD or TIFF) that it won't adjust.
Anyway, I have complete confidence that the RAW conversion will eventually be fixed, so even though that's an immediate glaring issue, it doesn't seem to me to be the long-term workflow killer. Being able to traverse this editing stage of the workflow with Photoshop is crucial. Without it, you're basically looking at a complete project export after the initial organization stage, and the rest of the workflow is either managed entirely in another tool or a complete pain to bring back into Aperture for output, which leaves files strewn everywhere for backup and archiving.
Brad -
How to enable repetitive values in report.
hi,
I don't want to suppress values in my report. My report is currently displayed the following way:
plant price
1000 200
blank 300
blank 400
2000 500
blank 600.
But I want to display the report the following way: I want the plant (characteristic) value displayed in every row instead of only once at its first occurrence.
plant price
1000 200
1000 300
1000 400
2000 500
2000 600.
Is this possible in BW 3.5?
Thank you in advance.

Thank you very much for your reply. But plant is not a key figure here. I have unchecked the 'Hide repeated key values' option, but I did not get the required result.
I want the plant values (char) displayed in each row. As of now, plant values are displayed only once (at the first occurrence).
Please reply. Points assigned. -
One to Many data model solutions
Hello Friends,
Recently I got a requirement where I need to join two line-item DSOs on a reference key, but with a one-to-many relationship. I hesitate to use an InfoSet because these are very large DSOs, which will affect report performance.
I even thought of updating one DSO's data to the InfoCube by looking up values from the other DSO in the transformation. I am stuck on every road I take to fulfill this requirement.
Can anyone share ideas or thoughts please? It will be really helpful. Thanks for your time.

Hello Ganesh,
Thanks very much for your answer..... I will explain my issue below:
DSO A: ( Delta load)
Company_code G/L_acct Doc_number Reference_Key Invoiced Amount
1000 1009673 767787 100008 100$
DSO B: (Full Load monthly)
Company_code Reference_Key Reference_Key_itm Sold_to Tax_paid_amt
1000 100008 010 A 200$
1000 100008 020 B 300$
1000 100008 030 C 400$
1000 100008 040 D 500$
DSO C:
Company_code Reference_Key Reference_Key_itm Sold_to Tax_paid_amt G/L_acct Invoiced Amount
1000 100008 010 A 200$ 1009673 100$
1000 100008 020 B 300$ 1009673 100$
1000 100008 030 C 400$ 1009673 100$
1000 100008 040 D 500$ 1009673 100$
If I load DSO B --> DSO C and then do a lookup on DSO A, I will miss the delta changes that happen in DSO A.
Can I load both DSO A and DSO B to DSO C?
Edited by: Ram on Nov 24, 2009 6:55 PM -
My wishlist after 1 year with Lightroom
I have used LR for the past year as my primary workflow tool and have grown to really appreciate the productivity it delivers. However, we all know there is room for improvement. So I thought I would take a little time to read through the feature requests in this forum, consider my own needs, and assemble a list of the key features that would really matter for my workflow.
A little background: I primarily shoot sports, often shooting 300-1000 frames per event. I use LR to select, correct, and crop my keepers, then export them to my website for sale. I will spend more time fine-tuning some of the better images, build galleries, and occasionally print. I really haven't found a need for Slideshow. I now have a little over 40k images in my primary catalog. I am at the point where I need more productivity processing the images, particularly for noise removal and color correction, and better tools for managing a larger volume of images.
I hope you find this useful.
a) Better camera calibration - integrated with LR maybe a built-in tool designed to work with a standard color checker
b) Better noise reduction (Noiseware-like)
c) Previews with NR and sharpening applied
d) Improve auto-toning perhaps with configurable options like auto-levels in PS
e) Some kind of magic tool to help correct color when dealing with cycling vapor lights (I shoot a lot of indoor sports in caves) - maybe skin-tone driven. Am I dreaming?
f) Geometry correction tool in lens corrections
g) Gradient tool or similar for ND-like effect to help fix skies etc..
h) Green-eye removal tool
i) Improved spot removal/healing (make it work consistently when applied to the corner of an image)
j) Grain control to help emulate film look
k) Output sharpening in export USM or similar, ideally with some sort of preview feature to enable you to fine tune settings
l) Output sharpening in web galleries again, USM or similar with some sort of preview
m) Improved export plug-in architecture need a formal approach for combining/chaining plug-ins on export (piglets seem a tad limited and they are cumbersome to install and manage)
n) Lua needs to offer the ability to bind in external DLLs or similar libraries (command-line invocation is too crude and slow, and flashing windows consume focus making it impossible to use another tool while export is running)
o) Softproofing for print output
p) Faster performance when scrolling images in grid mode (Elements can do it, why can't LR?)
q) Better tools for backing-up/archiving sets of folders to offline storage
r) Archive to DVD / burn disc feature
s) Plug-in architecture for image processing components
t) Vignette effect tool that works with crops with the ability to set the center-point manually
u) Ability to move sets of folders in one operation

I think Travis has produced a well-thought-out list.
I agree with most of it, except that I would put some things higher up the list.
(BTW: Thanks to the Adobe team for the anti-fringing improvements - they actually work and are great!)
I am a pretty intensive user of Lightroom. I only have 20K images, but the flow rate has increased, and although I find LR very efficient, the limitations are starting to bug me. Hence I am always hoping for an update that delivers more functionality.
My List: (Shorter, but things left off do not mean I don't want them!)
1. Need a better starting point than just the raw with invisible camera calibration presets applied.
a. I want a lot more control over the auto application and set up of camera calibration presets. An ISO 100 image is handled way differently to an ISO 800 or 1200.
b. I want an auto-toned, auto-colored start point (something like "Perfectly Clear" as used in BibblePro). In other words, I want a much more intelligent start point than the current fairly crude and invisible one. (Other auto engines include ColourScience, DXPro.) At present I prefer Perfectly Clear. Some of these do things like move greens and blues to nicer hues, remove green (or other) color casts, local contrast enhancement, and face detection with de-redding of skin tones (e.g. ColourScience).
c. The option to plug in 3rd-party tools (in particular global auto editors) and to set one as an intelligent default start point. (I know the start point won't be right for every image, but it will be for enough of them to make it really valuable.)
2. The sharpening (though improved) and noise control need to be much better. At present the preview does not reflect sharpening changes accurately. (I have a very large, high-gamut monitor; sharpening and noise changes are accurately reflected in BibblePro but not Lightroom.)
3. More global image edits need to be available, for example transformations (perspective and lens-distortion correction, ND gradient exposure adjustments, channel mixer, etc.).
4. Sometimes the LR controls are too coarse. I would like some way to quickly (e.g. hit "minus" a few times) reduce the range of some of the controls. Or maybe a better choice of range for some controls could be based on a preset or on adjustable default settings, e.g. right-click brings up a gain control on each slider, say?
5. As an image manager, it is a problem that Lightroom chokes on some large images. Lightroom should simply handle the lot, no questions asked. In particular, I have Photoshop pano TIFFs that LR complains about. It would also be great if LR could process my BibblePro files. (BibblePro uses ".bib" files in place of ".xmp" to do exactly the same thing as LR.)
6. Collections are much improved, but still need more control: the ability to sort on a number of criteria, plus joining, dividing, and evaluating images independently of ratings in the underlying database/catalogue.
Anyway I hope the LightRoom Adobe team keep up the good work. -
Help Needed in Relational logic
Hi
Working in 2008 R2 version.
Below is the sample data to play with.
declare @users table (IDUser int primary key identity(100,1),name varchar(20),CompanyId int, ClientID int);
declare @Cards table (IdCard int primary key identity(1000,1),cardName varchar(50),cardURL varchar(50));
declare @usercards table (IdUserCard int primary key identity(1,1), IDUser int,IdCard int,userCardNumber bigint);
Declare @company table (CompanyID int primary key identity(1,1),name varchar(50),ClientID int);
Declare @client table (ClientID int primary key identity(1,1),name varchar(50));
Declare @company_cards table (IdcompanyCard int primary key identity(1,1),CompanyId int,IdCard int)
Declare @Client_cards table (IdclientCard int primary key identity(1,1),ClientID int,IdCard int)
insert into @users(name,CompanyId,ClientID)
select 'john',1,1 union all
select 'sam',1,1 union all
select 'peter',2,1 union all
select 'james',3,2
Insert into @usercards (IdUser,IdCard,userCardNumber)
select 100,1000,11234556 union all
select 100,1000,11234557 union all
select 100,1001,123222112 union all
select 200,1000,2222222 union all
select 200,1001,2222221 union all
select 200,1001,2222223 union all
select 200,1002,23454323 union all
select 300,1000,23454345 union all
select 300,1003,34543456;
insert into @Cards(cardName,cardURL)
select 'BOA','BOA.com' union all
select 'DCU','DCU.com' union all
select 'Citizen','Citizen.com' union all
select 'Citi','Citi.com' union all
select 'Americal Express','AME.com';
insert into @Client(name)
select 'AMC1' union all
select 'AMC2'
insert into @company(name,ClientId)
select 'Microsoft',1 union all
select 'Facebook',1 union all
select 'Google',2;
insert into @company_cards(CompanyId,IdCard)
select 1,1000 union all
select 1,1001 union all
select 1,1002 union all
select 1,1003 union all
select 2,1000 union all
select 2,1001 union all
select 2,1002;
Requirement :
1. Get the distinct users' card details. The reason for using distinct is that a user can have the same card multiple times with different UserCardNumbers.
Ex: a user can have more than one BOA card in the @usercards table with different UserCardNumbers, but even though he has two BOA cards, my query should return one row.
2. After the 1st step, check whether there are any rows in @company_cards for the user's CompanyId. If yes, select the details from @company_cards; if not, select them from @client_cards.
In this case we need to make sure we don't have repeated data in the @FinalData table.
My Logic:
Declare @FinalData table (IDCard int,CardName varchar(50),CardURL varchar(50))
declare @IdUser int = 100, @ClientID int,@companyID int;
select @ClientID = ClientID,@companyID = CompanyId from @users where IDUser = @IdUser;
insert into @FinalData (IDCard,CardName,CardURL)
Select distinct c.IdCard,c.cardName,c.cardURL from @usercards UC join @Cards C on(uc.IdCard = c.IdCard)
where IDUser=@IdUser;
if exists(select 1 from @company_cards where CompanyId = @companyID)
BEGIN
insert into @FinalData(IDCard,CardName,CardURL)
select c.IdCard,c.cardName,c.cardURL from @company_cards cc join @Cards c on(cc.IdCard = c.IdCard) where CompanyId = @companyID
and cc.IdCard not in(select IDCard from @FinalData);
END
ELSE
BEGIN
insert into @FinalData(IDCard,CardName,CardURL)
select c.IdCard,c.cardName,c.cardURL from @client_cards cc join @Cards c on(cc.IdCard = c.IdCard) where ClientID = @ClientID
and cc.IdCard not in(select IDCard from @FinalData);
END
select * from @FinalData;
The logic produces a valid result. Is there an alternative way to achieve this? I feel there might be a more proper way to write this kind of query. Any suggestions, please?
[The sample schema and data are provided just for testing; I didn't include the indexes, etc.]
loving dotnet

You can simply merge the statements like below:
Declare @FinalData table (IDCard int,CardName varchar(50),CardURL varchar(50))
declare @IdUser int = 100
;With CTE
AS
(
Select IdCard, cardName, cardURL,
ROW_NUMBER() OVER (PARTITION BY IdCard ORDER BY Ord) AS Seq
FROM
(
Select c.IdCard,c.cardName,c.cardURL,1 AS Ord
from @usercards UC join @Cards C on(uc.IdCard = c.IdCard)
where IDUser=@IdUser
union all
select c.IdCard,c.cardName,c.cardURL,2
from @company_cards cc join @Cards c on(cc.IdCard = c.IdCard)
join @users u on u.CompanyId = cc.CompanyId
where u.IDUser = @IdUser
union all
select c.IdCard,c.cardName,c.cardURL,3
from @client_cards cc join @Cards c on(cc.IdCard = c.IdCard)
join @users u on u.ClientID= cc.ClientID
where u.IDUser = @IdUser
)t
)
insert into @FinalData (IDCard,CardName,CardURL)
SELECT IdCard, cardName, cardURL
FROM CTE
WHERE Seq = 1
select * from @FinalData;
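One caveat worth noting (an untested sketch, reusing the @companyID/@ClientID variables from the original post): a merged per-card priority query falls back to client cards card-by-card, while the original IF/ELSE uses client cards only when the company has no mapped cards at all. If that rule must be preserved in a single statement, something like this keeps it, with UNION doing the de-duplication:

```sql
insert into @FinalData (IDCard, CardName, CardURL)
select c.IdCard, c.cardName, c.cardURL
from @usercards uc
join @Cards c on uc.IdCard = c.IdCard
where uc.IDUser = @IdUser
union
select c.IdCard, c.cardName, c.cardURL
from @company_cards cc
join @Cards c on cc.IdCard = c.IdCard
where cc.CompanyId = @companyID
union
select c.IdCard, c.cardName, c.cardURL
from @client_cards cc
join @Cards c on cc.IdCard = c.IdCard
where cc.ClientID = @ClientID
  -- fall back to client cards only when the company has none mapped
  and not exists (select 1 from @company_cards where CompanyId = @companyID);
```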
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page -
SSD and HDD / TimeMachine and CCC
I have a 240GB SSD as my main drive and it had all of my data on it.
Recently I installed the OWC data doubler and put a 500GB HDD in place of the superdrive.
Since then, I copied my "home" folder (under /Users) to the new 500GB HDD.
After this I went into user settings and set the path to the new drive, so new files created will be stored on the HDD instead of the SSD now.
I have yet to delete the DUPLICATE data on the SSD (in case this method won't work).
My question is about TimeMachine:
Right now I have TWO external drives to back up my data.
Drive 1: 2TB
Partitioned into 1.5TB for TM and the other 500GB for Carbon Copy Cloner.
This drive (1) has been backing up my system since the beginning, Before my "home" folder was moved.
Drive 2: 1TB
Partitioned into two separate 500GB sections.
TM is set up to back up the data on the new internal drive (the HDD, which has had no data to back up as of now).
The other partition is for CCC, which also has not been used yet because there is no data on the HDD (it is still on the SSD).
The question is:
Since I moved my "home" folder (or directory, whatever) to the new 500GB HDD, and changed the settings to have all "data" be stored on that drive, will TM back it up as if it were all still on my main SSD?
And if it will NOT back it up as a single drive, will the data be duplicated in TM? (I don't want duplicate backups.)
If it will make duplicate backups, would I be able to erase the external drive and start fresh and have it just backup as if it were a single disk? (something I am interested in, especially since I can then encrypt my external drive)
Is it stupid to have 1.5 TB for TM and 500GB for CCC? Should I just partition it into TWO 1TB drives? (Since my 240+500=740GB of space...)
A few more things I'm concerned about.
My second external drive has Time Machine on it for the data on the HDD, but the "home" directory is now stored on that drive, and if TM backs everything up as if it were a single drive, then TM on the second external drive becomes pointless. (Correct?)
Now, I know that making duplicate backups of data is critical, so I suppose I could have TM set up on the second drive to also back up the main drive as if it were a single drive (so it would back up my SSD and HDD, just like my primary external drive).
A good setup would be to partition the SECOND external drive with a 750GB partition and the rest as 250GB. Then TM would have enough space to back up both internal drives (just like the primary external), and the 250GB could just be a clone of the SSD (using CCC - this would be a duplicate clone).
So this leaves me with two TM external backups and then two CCC backups.
Please tell me if I left out any details. I know it's a lengthy description!
Thanks for taking the time to read and respond.
Greatly appreciated!

Hello GNUTup,
Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums I would like to direct your attention to the HP Forums Guide First Time Here? Learn How to Post and More.
I understand you are trying to install a new SSD or HDD in your HP Touchsmart 300-1025 Desktop PC.
First, I am going to provide you with the HP Support document: HP Touchsmart 300-1025 Desktop PC, which will walk you through the process of replacing your current hard drive. If you require the hard disk drive mounting cage assembly, the part number is 575664-001 and it can be obtained from The HP Parts Store. I have not seen anything that limits the type of drive you can install, but I have only seen documentation on HDDs. I am also going to provide you with the HP Support document: Partitioning and Naming Hard Drives (Windows 7), as it is relevant to your computer, and since you are replacing the hard drive this is a great opportunity to review a document of this type.
Second, as for your CD-ROM, I am providing you with the HP Support document: Replacing the CD/DVD Drive in HP TouchSmart 300-1000 Series Desktop PCs, which again will walk you through the process of changing out your CD-ROM drive. If you require the optical disk drive mounting cage assembly, the part number is 575663-001, and again it can be obtained from The HP Parts Store.
I hope I have answered your questions to your satisfaction. Thank you for posting on the HP Forums. Have a great day!
Please click the "Thumbs Up" on the bottom right of this post to say thank you if you appreciate the support I provide!
Also be sure to mark my post as “Accept as Solution" if you feel my post solved your issue, it will help others who face the same challenge find the same solution.
Dunidar
I work on behalf of HP
Find out a bit more about me by checking out my profile!
"Customers don’t expect you to be perfect. They do expect you to fix things when they go wrong." ~ Donald Porter -
ST22 Short Dump STORAGE_PARAMETERS_WRONG_SET
Hi
We have a NW04 BW 350 system running on Windows 2003 32 Bit/MS SQL 2000 32 Bit.
When I go into RSA --> Monitoring --> PSA, it says 'constructing administrator workbench' and then fails with the short dump:
<b>STORAGE_PARAMETERS_WRONG_SET</b>
The short dump recommends increasing the parameters abap/heap_area_dia and abap/heap_area_nondia, but I would have thought you would want to avoid using HEAP memory, as this locks the memory into a single work process?
Looking at the memory consumption in the shortdump :
<b>Memory usage.............
Roll..................... 1709536
EM....................... 108964960
Heap..................... 406948064
Page..................... 57344
MM Used.................. 495709152
MM Free.................. 17059832
SAP Release.............. "640"</b>
EM has only been used for about 100 MB before the process goes to heap memory.
Looking at ST02 the system has 4GB EM and only 1.7GB used at 42%, so why would the process only use 100 MB of EM?
ztta/roll_extension is set to default 2GB so it appears EM memory should be utilised more by the work process before going to HEAP memory.
What parameters affect the usage of EM before entering HEAP usage?
Thanks for any advice.
Dear friend,
kindly see the following parameter recommendations:
Parameter | Description | Current value | Recommended value
rsdb/ntab/entrycount | Number of nametab entries administered | 20000 | 30000
rsdb/ntab/ftabsize | Data area size for field description buffer | 30000 | 60000
rsdb/ntab/irbdsize | Data area size for initial records buffer | 6000 | 8000
rtbb/buffer_length | Size of single record table buffers | 16383 | 60000
zcsa/table_buffer_area | Size of generic table buffer | 64000000 | 120000000
zcsa/db_max_buftab | Directory entries in generic table buffer | 5000 | 30000
zcsa/presentation_buffer_area | Size of the buffer allocated for screens | 4400000 | 20000000
sap/bufdir_entries | Maximum number of entries in the presentation buffer | 2000 | 10000
rsdb/obj/buffersize | Size of export/import buffer | 4096 | 40000
rsdb/obj/max_objects | Max. no. of exporting/importing objects | 2000 | 20000
Parameter | Description | Current value | Recommended value
em/initial_size_MB | Size of extended memory pool | 4096 | 8192
em/global_area_MB | Size of SAP Extended Global Memory (see SAP Note 329021) | 96 | 255
ztta/roll_area | Maximum roll area per user context | 3000000 | 16773120
rdisp/PG_SHM | Paging buffer size in shared memory | 8192 | 32768
Parameter | Description | Current value | Recommended value
rdisp/wp_ca_blk_no | Work process communication blocks | 300 | 1000
rdisp/appc_ca_blk_no | Buffer size for CPI-C communications | 100 | 2000
gw/max_conn | Max. number of active connections | 500 | 2000
rdisp/tm_max_no | Max. number of entries in array tm_adm | 200 | 2000
rdisp/max_comm_entries | Max. number of communication entries | 500 | 2000
gw/max_overflow_size | Max. swap space for CPIC requests in the gateway | 5000000 | 100000000
gw/max_sys | Max. number of connected clients | 300 | 1000
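On the original question (why the work process only used ~100 MB of EM before switching to heap): a dialog work process draws on extended memory up to its per-process quota (ztta/roll_extension) and up to what is free in the EM pool, and only then falls back to private heap. This is not SAP code, just a toy Python model of that accounting; the numbers are illustrative, roughly matching the dump (~100 MB EM, ~388 MB heap):

```python
# Toy model (not SAP code) of how a dialog work process consumes memory:
# extended memory (EM) up to the per-process quota (ztta/roll_extension)
# and the free EM pool, then private heap up to abap/heap_area_dia.
def allocate(request_mb, em_quota_mb, em_free_mb, heap_limit_mb):
    """Return (em_used, heap_used) for a single work process."""
    em_used = min(request_mb, em_quota_mb, em_free_mb)
    heap_used = min(request_mb - em_used, heap_limit_mb)
    if em_used + heap_used < request_mb:
        # Request cannot be satisfied within the quotas -> short dump
        raise MemoryError("STORAGE_PARAMETERS_WRONG_SET-style failure")
    return em_used, heap_used

# Roughly the situation in the dump: if something caps EM usage at
# ~100 MB (quota or pool), the remaining ~388 MB must come from heap.
print(allocate(488, em_quota_mb=100, em_free_mb=4096, heap_limit_mb=2000))
```

If ST02 shows plenty of free EM, the cap is usually the per-process quota rather than the pool itself, which is why ztta/roll_extension is the first parameter to check.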
shailesh -
AT NEW...
Hi Experts,
I have to transfer the contents of one internal table to another.
1st itab:
<u><b>SNo|Material|Plant|Qty|</b></u>
1000|12345678|USA|100
1000|45457988|USA|200
1000|78458956|USA|300
1000|41235630|USA|400
1000|12345678|CAN|100
1000|45457988|CAN|200
1000|78458956|CAN|300
1000|41235630|CAN|400
1000|12345678|JPN|100
1000|45457988|JPN|200
1000|78458956|JPN|300
1000|41235630|JPN|400
2nd Itab:
<u><b>Plant|Flag|</b></u>
Now for every "new plant" in the 1st itab, I have to pass the plant to the 2nd itab.
Ultimately I have to see the 2nd Itab like this:
<u><b>Plant|Flag|</b></u>
USA|X
CAN|X
JPN|X
I tried to use the AT NEW statement in the loop, but it's giving the wrong results.
Can somebody help me?
Thanks
SK
Hi,
you need to change the order of the fields in the internal table:
SNo|Material|Plant|Qty|
1000|12345678|USA|100
1000|45457988|USA|200
1000|78458956|USA|300
1000|41235630|USA|400
1000|12345678|CAN|100
1000|45457988|CAN|200
1000|78458956|CAN|300
1000|41235630|CAN|400
1000|12345678|JPN|100
1000|45457988|JPN|200
1000|78458956|JPN|300
1000|41235630|JPN|400
Plant|SNo|Material|Qty|
USA|1000|12345678|100
now sort your itab by PLANT
then use at new PLANT
it will work.
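The field order matters because in ABAP, AT NEW fires when the named field or any field to its left changes, so PLANT needs to come first. As a rough illustration (in Python, not ABAP), the sort-then-detect-change logic looks like this, using the data from the post:

```python
# Sketch (not ABAP) of the SORT ... / AT NEW plant logic:
# sort by plant, then append each plant once with flag 'X'.
rows = [
    ("1000", "12345678", "USA", 100),
    ("1000", "45457988", "USA", 200),
    ("1000", "12345678", "CAN", 100),
    ("1000", "45457988", "CAN", 200),
    ("1000", "12345678", "JPN", 100),
]

# SORT itab BY plant.
rows.sort(key=lambda r: r[2])

itab2 = []
prev_plant = None
for sno, material, plant, qty in rows:
    if plant != prev_plant:          # "AT NEW plant" fires here
        itab2.append((plant, "X"))
        prev_plant = plant

print(itab2)  # [('CAN', 'X'), ('JPN', 'X'), ('USA', 'X')]
```

In ABAP the comparison against the previous row is done for you by the AT NEW control-level statement, provided the table is sorted and the key field is leftmost.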
Regards
Vijay -
Hi,
I have data in the following format.
kunnr vertn dmbtr
1000 70 300
1000 70 400
1000 70 300
1000 71 800
1000 71 500
1000 65 900
1000 65 100
2000 43 450
2000 43 550
2000 43 400
2000 40 100
2000 40 300
My requirement is that all the 70s should be displayed on one page,
71 on another page, and 65 on another page. I have done that.
As for the page number:
on change of kunnr, the page number should start again from 1, i.e. 1 out of 5 (in case there are 5 pages for one kunnr).
I tried to reset the page number to 1 in the "on change of kunnr" event, but in vain.
I've also used NEW-PAGE on change of vertn, since each contract (vertn) should be displayed on a different page.
Please help.
Thanks and regards.
Hi Renu,
Use the same AT END OF statement.
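Whichever event you use, the counting logic being described is: a new page per contract (vertn), with the page counter restarting at 1 for each new customer (kunnr). A rough sketch of that logic in Python (not ABAP; data simplified from the post):

```python
# Page numbering sketch: one page per contract (vertn),
# counter restarts at 1 for each new customer (kunnr).
records = [
    ("1000", "70"), ("1000", "70"),
    ("1000", "71"),
    ("1000", "65"), ("1000", "65"),
    ("2000", "43"),
    ("2000", "40"),
]

pages = []                       # (kunnr, vertn, page_no) per record
prev_kunnr = prev_vertn = None
page_no = 0
for kunnr, vertn in records:
    if kunnr != prev_kunnr:      # "on change of kunnr": reset numbering
        page_no = 1
    elif vertn != prev_vertn:    # "on change of vertn": new page
        page_no += 1
    pages.append((kunnr, vertn, page_no))
    prev_kunnr, prev_vertn = kunnr, vertn

print(pages)
```

In the report itself, the reset would go in the event that fires at the customer boundary, and the page break (NEW-PAGE) at the contract boundary.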
Regards
Srimanta -
- How to Display Sales Qty & Value for This Year & Last Year in 2 Columns -
Dear All,
I'm having trouble extracting the last year's figures based on the date entered. I would actually like to create a query that shows the top 10 items sold, based on item category. I've created a query which shows the top 10 items sold (total quantity & value) for this year, but it is not able to display the last year's figure (quantity). Please advise, and thanks for your help and time.
SET ROWCOUNT 10
SELECT T1.ItemCode, T2.ItemName, T3.ItmsGrpNam, SUM(T1.Quantity) as "Total Qty Sold", SUM(T1.TotalSumSy) as "Total Amount"
FROM ODLN T0 INNER JOIN DLN1 T1 ON T0.DocEntry = T1.DocEntry INNER JOIN OITM T2 ON T1.ItemCode = T2.ItemCode INNER JOIN OITB T3 ON T2.ItmsGrpCod = T3.ItmsGrpCod
WHERE T0.DocDate >='[%0]' AND T0.DocDate <='[%1]' AND T3.ItmsGrpNam ='[%A]'
GROUP BY T1.ItemCode, T2.ItemName, T3.ItmsGrpNam
ORDER by SUM(T1.Quantity) DESC
I wish to have the output as follows:
Item Qty (2008) Qty (2007) Value(2008) Value(2007)
A 300 150 1000 500
B 250 300 800 650
C 100 250 700 550
Currently, My results display:
Item Qty (2008) Value(2008)
A 300 1000
B 250 800
C 100 700
Cheers,
Serene
Hi,
if you want it to be more flexible, you could try this modified version of Istvan's query:
SELECT top 10 T1.ItemCode, T2.ItemName, T3.ItmsGrpNam, SUM(T1.Quantity) as "Total Qty Sold",
SUM(T1.TotalSumSy) as "Total Amount" ,
(select sum (r.Quantity) from ODLN h
inner join DLN1 r on h.DocEntry=r.DocEntry
where h.DocDate>='[%4]' and h.DocDate<='[%5]'
and r.ItemCode=T1.ItemCode) '2007 Sold',
(select sum (r.TotalSumSy) from ODLN h
inner join DLN1 r on h.DocEntry=r.DocEntry
where h.DocDate>='[%6]' and h.DocDate<='[%7]'
and r.ItemCode=T1.ItemCode) '2007 Amount'
FROM ODLN T0 INNER JOIN DLN1 T1 ON T0.DocEntry = T1.DocEntry INNER JOIN OITM T2 ON T1.ItemCode = T2.ItemCode INNER JOIN OITB T3 ON T2.ItmsGrpCod = T3.ItmsGrpCod
WHERE T0.DocDate >='[%0]' AND T0.DocDate <='[%1]' AND T3.ItmsGrpNam between '[%2]' and '[%3]'
GROUP BY T1.ItemCode, T2.ItemName, T3.ItmsGrpNam
ORDER by SUM(T1.Quantity) DESC
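The core trick in the modified query is the correlated subquery in the SELECT list: the outer query aggregates the current period, and the subquery re-reads the same tables for the prior period, matched on the item code. A minimal self-contained sketch of that pattern using Python's sqlite3 (table and column names here are simplified stand-ins, not the real SAP Business One schema):

```python
# Sketch of the "current period + correlated subquery for the prior
# period" pattern, on a simplified single-table stand-in schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (item TEXT, docdate TEXT, qty INT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("A", "2008-03-01", 300), ("A", "2007-03-01", 150),
    ("B", "2008-04-01", 250), ("B", "2007-04-01", 300),
])

rows = con.execute("""
    SELECT s.item,
           SUM(s.qty) AS qty_2008,
           (SELECT SUM(r.qty) FROM sales r
             WHERE r.docdate BETWEEN '2007-01-01' AND '2007-12-31'
               AND r.item = s.item) AS qty_2007     -- correlated on item
      FROM sales s
     WHERE s.docdate BETWEEN '2008-01-01' AND '2008-12-31'
     GROUP BY s.item
     ORDER BY SUM(s.qty) DESC
""").fetchall()

print(rows)  # [('A', 300, 150), ('B', 250, 300)]
```

In the real query the same idea is applied twice (once for the prior-year quantity, once for the prior-year amount), with the date ranges supplied through the [%4]..[%7] parameters.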
Rgds, -
Hi Guys,
I really only notice this whilst playing games (I use a Logitech Gamepad, which monitors my ping) but I have noticed that at 5 to the hour, every hour, without exception, for approximately 1-2 minutes my ping will shoot up from between 20-50 to 300-1000. I know it's only for a couple of minutes but it can be a real pain when you amass a good score and then you get kicked from a game for too high a ping.
This has been going on for some 3-4 months now (I thought it was just a temporary glitch) and to be honest I don't really know what to do. It's certainly not the game server, as I play various games and servers and it happens on all of them, and my mates don't seem to suffer from this anomaly.
Any help/guidance is greatly appreciated.
Running Homehub3
Thanks
Martin
At the moment I am hammering Battlefield Bad Company 2. I play with a couple of mates and it is only me suffering this problem. They have gamepads to check their pings and they say they're not having this problem at all.
I initially thought it might be a virus on my system sending information (probably paranoia here) and did a re-install of Windows, but the problem is still here.