Compression and Index
Hi BW Experts,
I deleted the indexes before loading the data.
Then I compressed the request without recreating the indexes.
The compression is taking a very long time.
Is this the right procedure? Is compression expected to take this long after the indexes have been deleted?
Thanks in advance.
Regards,
Anjali
Hi Anjali,
Deleting the indexes, recreating them, and then compressing is the general procedure.
Deleting and recreating the indexes is only worthwhile when you are loading data into the cube.
As far as I know, the index operations have no significance for compression itself: compression generates a separate E table, whereas the indexes work on the F table only.
If you are doing a cube load followed by compression, the standard steps are:
1. Delete the cube contents (depends on your requirement)
2. Delete the indexes
3. Load the cube data
4. Compress
5. Refresh DB statistics (you can skip this step if there is no performance issue)
6. Create the indexes
The best practice is not to include the compression step in the process chain, because once compression is done there is no possibility to delete the data request-wise, which you may need if there is an error in the data extraction.
Drop the indexes before data loading, because maintaining them during the load causes performance problems.
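To see why request-wise deletion becomes impossible, here is a toy Python sketch (not SAP code; the data is invented) of what compression does: F-table rows carry a request ID, and compression folds them into E-table rows keyed by the dimensions alone.

```python
from collections import defaultdict

# Toy model of BW cube compression: the F table keeps one row per
# (request, dimension combination); compression collapses requests
# into the E table, keyed by dimensions only.
f_table = [
    # (request_id, dim_key, amount)
    (1, "A", 10), (1, "B", 5),
    (2, "A", 7),  (2, "B", 3),
]

def compress(f_rows):
    e_table = defaultdict(int)
    for _req, dim, amount in f_rows:
        e_table[dim] += amount   # the request id is dropped for good
    return dict(e_table)

print(compress(f_table))  # {'A': 17, 'B': 8}
```

Once the request IDs are gone, there is no way to subtract request 2 from the totals, which is exactly why a bad load must be caught before compression.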
Edited by: Amar on Oct 14, 2008 11:10 AM
Similar Messages
-
Compressing and indexing drive D
Will indexing and compressing drive D speed up my PC, and can it do any harm?
The D partition is the Recovery partition. Do not tamper with this partition.
Compression for oracle database and index compression during import of data
Hi All,
I have a query: in order to import into an Oracle database with table compression and index compression, do we have load arguments for R3load, and do we have to change the .TPL file?
Hello guy,
I did this kind of compression in a migration project before.
I compressed the indexes first, and then did the export -> import with table compression.
One thing you should take care of: delete the NOCOMPRESS flag from TARGET.SQL (created by program SMIGR_CREATE_DDL, which generates purely non-compressed objects for the non-standard tables it considers). For tables with columns > 255, you should not delete this flag.
Regarding index compression in the source system, please check the following notes:
Note 1464156 - Support for index compression in BRSPACE 7.20
Note 1109743 - Use of Index Key Compression for Oracle Databases
Note 682926 - Composite SAP note: Problems with "create/rebuild index"
Best Regards,
Ning Tong -
Coalesce or compress this index? What is the best solution in this case?
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
I have executed the following query on a specific index that I suspected to be smashed and got the following result:
select
  keys_per_leaf, count(*) blocks
from (
  select sys_op_lbid (154813, 'L', jus.rowid) block_id,
         count (*) keys_per_leaf
  from xxx_table jus
  where jus.id is not null
     or jus.dat is not null
  group by sys_op_lbid (154813, 'L', jus.rowid)
)
group by keys_per_leaf
order by keys_per_leaf;
keys_per_leaf blocks
1 80
2 1108
3 2816
4 3444
5 3512
6 2891
7 2579
8 2154
9 1943
10 1287
11 1222
12 1011
13 822
14 711
15 544
16 508
17 414
18 455
19 425
20 417
21 338
22 337
23 327
24 288
25 267
26 295
27 281
28 266
29 249
30 255
31 237
32 259
33 257
34 232
35 211
36 209
37 204
38 216
39 189
40 194
41 187
42 200
43 183
44 167
45 186
46 179
47 179
48 179
49 171
50 164
51 174
52 157
53 181
54 192
55 178
56 162
57 155
58 160
59 153
60 151
61 133
62 177
63 156
64 167
65 162
66 171
67 154
68 162
69 163
70 153
71 189
72 166
73 164
74 142
75 177
76 148
77 161
78 164
79 133
80 158
81 176
82 189
83 347
84 369
85 239
86 239
87 224
88 227
89 214
90 190
91 230
92 229
93 377
94 276
95 196
96 218
97 217
98 227
99 230
100 251
101 266
102 298
103 276
104 288
105 638
106 1134
107 1152
229 1
230 1
This is a 5-column unique key index on (id number, dat date, id2 number, dat2 date, type number).
Furthermore, a space analysis of this index using dbms_space.space_usage gives the following picture
Number of blocks with at least 0 to 25% free space = 0 -------> total MB = 0
Number of blocks with at least 25-50% free space = 75 -------> total MB = 0,5859375
Number of blocks with at least 50 to 75% free space = 0 -------> total MB = 0
Number of blocks with at least 75 to 100% free space = 0 -------> total MB = 0
Number of full blocks with no free space = 99848 -------> total MB = 780,0625
Total blocks ______________________________
99923
Total size MB______________________________
799,384
It seems to me that this index needs to be either coalesced or compressed.
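As a quick sanity check on those dbms_space figures (assuming the default 8 KB block size, which is not stated in the post):

```python
BLOCK_SIZE = 8192  # assumed db_block_size of 8 KB

full_blocks = 99848
mb = full_blocks * BLOCK_SIZE / 1024 ** 2
print(mb)  # 780.0625, matching the figure for full blocks above
```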
Then, what would be the best option in your opinion?
Thanks in advance
Mohamed Houri
Edited by: Mohamed Houri on 12-janv.-2011 1:18
So let me continue my case.
I first compressed the index as follows:
alter index my_index rebuild compress 2;
which immediately presents two new situations:
(a) index space
Number of blocks with at least 0 to 25% free space = 0 -------> total MB = 0
Number of blocks with at least 25-50% free space = 40 -------> total MB = 0,3125
Number of blocks with at least 50 to 75% free space = 0 -------> total MB = 0
Number of blocks with at least 75 to 100% free space = 0 -------> total MB = 0
Number of full blocks with no free space = 32361 -------> total MB = 252,8203125
Total blocks ______________________________
32401
Total size MB______________________________
259,208
meaning that the compress command freed up 67487 leaf blocks and reduced the size of the index from 799,384 MB to 259,208 MB.
It also shows a relatively nice picture of the number of keys per leaf block (when compared to the previous situation).
(b) on the number of key per leaf block
KEYS_PER_LEAF BLOCKS
4 1
6 1
13 1
15 1
25 1
62 1
63 1
88 1
97 1
122 1
123 3
124 6
125 4
126 2
289 4489
290 3887
291 3129
292 2273
293 1528
294 913
295 442
296 152
297 50
298 7
299 1
In a second step, I coalesced the index as follows:
alter index my_index coalesce;
which produces the new figures:
Number of blocks with at least 0 to 25% free space = 0 -------> total MB = 0
Number of blocks with at least 25-50% free space = 298 -------> total MB = 2,328125
Number of blocks with at least 50 to 75% free space = 0 -------> total MB = 0
Number of blocks with at least 75 to 100% free space = 0 -------> total MB = 0
Number of full blocks with no free space = 32375 -------> total MB = 252,9296875
Total blocks ______________________________
32673
Total size MB______________________________
261,384
meaning that the coalesce command has produced:
(a) 298-40 = 258 new blocks with 25-50% free space
(b) 32375-32361 = 14 additional blocks which have become full
(c) an index size increase of 2,176 MB (261,384-259,208)
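The three deltas above check out arithmetically (the post's comma decimals written with dots here):

```python
# Deltas between the post-rebuild figures and the post-coalesce figures
blocks_25_50 = 298 - 40                     # blocks with 25-50% free space
new_full     = 32375 - 32361                # additional full blocks
size_growth  = round(261.384 - 259.208, 3)  # MB
print(blocks_25_50, new_full, size_growth)  # 258 14 2.176
```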
While the number of keys per leaf block stays in much the same situation:
KEYS_PER_LEAF BLOCKS
4 2
5 3
9 1
10 2
12 1
13 1
19 1
31 1
37 1
61 1
63 1
73 1
85 1
88 1
122 1
123 4
124 4
125 3
126 1
289 4492
290 3887
291 3125
292 2273
293 1525
294 913
295 441
296 152
297 50
298 7
299 1
Could you please throw some light on the difference between compress and coalesce, regarding the effect each has had on
(a) the number of keys per leaf blocks within my index
(b) the space and size of my index?
Best regards
Mohamed Houri -
DB02 view is empty on Table and Index analyses DB2 9.7 after system copy
Dear All,
I did the quality refresh by the system copy export/import method: ECC6 on HP-UX, DB2 9.7.
After the import, the runstats status in DB02 for table and index analysis was empty and all values showed '-1', even though:
a) all standard backgrnd job scheduled in sm36
b) Automatic runstats are enabled in db2 parameters
c) Reorgchk all scheduled periodically from db13 and already ran twice.
d) 'reorgchk update statistics on table all' was also run at the DB2 level.
But the runstats status in DB02 is not getting updated. It is empty.
Please suggest.
Regards
Vinay
Hi Deepak,
Yes, that is possible (but only with an offline backup). But for the new features like reclaimable tablespaces (to lower the high watermark),
it is better to export/import with a system copy.
Also, with a system copy you can use index compression.
After backup and restore you can also get reclaimable tablespaces, but you have to create new tablespaces
and then use db6conv and online table move to move each tablespace online to the new ones.
Best regards,
Joachim -
Compress nonclustered index on a compressed table
Hi all,
I've compressed a big table; space has shrunk from 180GB to 20GB using page compression.
I've observed that this table has 50GB of indexes too; this space has remained the same.
1) Is it possible to compress a nonclustered index on an already compressed table?
2) Is it a best practice?
ALTER INDEX...
https://msdn.microsoft.com/en-us/library/ms188388.aspx
You saved the disk space, that's fine, but now check whether there is an impact on the queries: do you observe any improvement in terms of performance?
http://blogs.technet.com/b/swisssql/archive/2011/07/09/sql-server-database-compression-speed-up-your-applications-without-programming-and-complex-maintenance.aspx
Best Regards,
Uri Dimant, SQL Server MVP,
http://sqlblog.com/blogs/uri_dimant/
Compression and query performance in data warehouses
Hi,
Using Oracle 11.2.0.3, we have a large fact table with bitmap indexes to the associated dimensions.
I understand bitmap indexes are compressed by default, so I assume they cannot be compressed further.
Is this correct?
I wish to try compressing the large fact table to see if this will reduce the I/O on reads and therefore give performance benefits.
ETL speed is fine; I just want to increase report performance.
Thoughts? Has anyone seen significant gains in data warehouse report performance with compression?
Also, the current PCTFREE on the table is 10%.
As the table is insert-only, I am considering making this 1% to improve report performance.
Thoughts?
Thanks
First of all:
Table Compression and Bitmap Indexes
To use table compression on partitioned tables with bitmap indexes, you must do the following before you introduce the compression attribute for the first time:
Mark bitmap indexes unusable.
Set the compression attribute.
Rebuild the indexes.
The first time you make a compressed partition part of an existing, fully uncompressed partitioned table, you must either drop all existing bitmap indexes or mark them UNUSABLE before adding a compressed partition. This must be done irrespective of whether any partition contains any data. It is also independent of the operation that causes one or more compressed partitions to become part of the table. This does not apply to a partitioned table having B-tree indexes only.
This rebuilding of the bitmap index structures is necessary to accommodate the potentially higher number of rows stored for each data block with table compression enabled. Enabling table compression must be done only for the first time. All subsequent operations, whether they affect compressed or uncompressed partitions, or change the compression attribute, behave identically for uncompressed, partially compressed, or fully compressed partitioned tables.
To avoid the recreation of any bitmap index structure, Oracle recommends creating every partitioned table with at least one compressed partition whenever you plan to partially or fully compress the partitioned table in the future. This compressed partition can stay empty or even can be dropped after the partition table creation.
Having a partitioned table with compressed partitions can lead to slightly larger bitmap index structures for the uncompressed partitions. The bitmap index structures for the compressed partitions, however, are usually smaller than the appropriate bitmap index structure before table compression. This highly depends on the achieved compression rates. -
OLTP compression and Backupset Compression
We are testing out a new server before we migrate our production systems.
For the data we are using OLTP compression.
I am now testing performance of rman backups, and finding they are very slow and CPU bound (on a single core).
I guess that this is because I have also specified to create compressed backupsets.
Of course for the table blocks I can understand this attempt at double compression will cause slowdown.
However for index data (which of course cannot be compressed using OLTP compression), compression will be very useful.
I have attempted to improve performance by increasing the parallelism of the backup, but from my testing this only increases
the number of channels writing the data; there is still only one core doing the compression.
Any idea how I can apply compression to index data, but not the already compressed table segments?
Or is it possible that something else is going on?
Hi Patrick,
You can also check my compression level test.
http://taliphakanozturken.wordpress.com/2012/04/07/comparing-of-rman-backup-compression-levels/
Thanks,
Talip Hakan Ozturk
http://taliphakanozturken.wordpress.com/ -
What is the difference between Topic Keywords and Index File Keywords?
What is the difference between Topic Keywords and Index File Keywords? Any advantages to using one over the other? Do they appear differently in the generated index?
RH9.0.2.271
I'm using Webhelp.
Hi there,
When you create a RoboHelp project you end up with many different ancillary files that are used to store different bits of information. Many of these files bear the name you assigned to the project at the time you created it. The index file has the project name and it ends with a .HHK file extension. (HHK meaning HTML Help Keywords)
Generally, unless you change RoboHelp's settings, you add keywords to this file and associate topics to the keywords via the Index pod. At the time you compile a CHM or generate other types of output, the file is consulted and the index is built.
As I said earlier, the default is to add keywords to the Index file until you configure RoboHelp to add the keywords to the topics themselves. Once you change this, any keyword added will become a META tag in the topic code. If your keyword is BOFFO, the META tag would look like this:
<meta name="MS-HKWD" content="BOFFO" />
When the help is compiled or generated, the Index (.HHK) file is consulted as normal, but any topics containing keywords added in this manner are also added to the Index you end up with. From the appearance perspective, the end user wouldn't know the difference or be able to tell. Heck, if all you ever did was interact with the Index pod, you as an author wouldn't know either, other than the fact that the icons appear differently.
Operationally, keywords added to the topics themselves may hold an advantage in that if you were to import these topics into other projects, the Index keywords would already be present.
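For illustration, here is a small Python sketch (hypothetical file contents; RoboHelp itself does this for you at generation time) of how those topic-level META keywords could be harvested from topic HTML:

```python
import re

# Matches the MS-HKWD META tag RoboHelp writes into a topic, e.g.
# <meta name="MS-HKWD" content="BOFFO" />
HKWD = re.compile(r'<meta\s+name="MS-HKWD"\s+content="([^"]+)"', re.IGNORECASE)

def topic_keywords(html_text):
    """Return the index keywords embedded in one topic's HTML."""
    return HKWD.findall(html_text)

sample = '<html><head><meta name="MS-HKWD" content="BOFFO" /></head></html>'
print(topic_keywords(sample))  # ['BOFFO']
```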
Hopefully this helps... Rick -
Different b/w index rebuild and index rebuild online
Hi guys, could you please tell me the difference between index rebuild and index rebuild online?
Both commands produce the same result: the index structure is rebuilt from scratch. But with a plain REBUILD, the index is unavailable to other users for as long as its temporary segment is being prepared and merged. The ONLINE clause makes the index available to others even while it is being rebuilt.
Rebuilding index online has the same concept of creating them online to some extent,
http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/indexes.htm#3062
HTH
Aman.... -
Ceartion of User Defined Field in EXCHANGE RATE AND INDEXES
Hi Experts,
I want to create a user-defined field in EXCHANGE RATE AND INDEXES, but while creating the UDF from User-Defined Fields - Management I am unable to find the table for it. Right now my client is using SAP B1 2007, Patch 08. Is there any way to create a user-defined field in EXCHANGE RATE AND INDEXES?
Please help me out on this issue.
with regards,
Pankaj K and Kamlesh N
Pankaj,
When you go to the Manage User Fields area to define a UDF, all the possible areas where UDFs can be created in B1 are listed. You can create UDFs only on these.
Suda -
Table files and Index files 2GB on Windows 2003 Server SP2 32-bit
I'm new to Oracle and I've run into a problem where my table files and index files are > 2GB. I have an Oracle instance running version 10.2.0.3.0, with a number of table and index files at a current file size of 1.99GB. My Oracle crashes about three times a week because of a write fault/failure. I've determined that the RDBMS is trying to write an index or table file > 2GB, and when this occurs it crashes.
I've been reading the Oracle knowledge base, which suggests that there is a fix or release of Oracle 10g to resolve this problem. However, I've been unable to locate any such fix or release. Does one exist? How do I address this issue? I'm from the world of MS SQL and IBM DB2 and we don't have this issue there. I am running an NTFS file system. Could this issue be related to a Windows fix?
Surely Oracle can handle databases > 2GB.
Thanks in advance for any help.
After reading your response, it appears that my real problem has to do with checkpointing. I've included below a copy of the error message:
Oracle process number: 8
Windows thread id: 3768, image: ORACLE.EXE (CKPT)
*** 2008-07-27 16:50:13.569
*** SERVICE NAME:(SYS$BACKGROUND) 2008-07-27 16:50:13.569
*** SESSION ID:(219.1) 2008-07-27 16:50:13.569
ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
OSD-04008: WriteFile() failure, unable to write to file
O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
error 221 detected in background process
ORA-00221: Message 221 not found; No message file for product=RDBMS, facility=ORA
ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
OSD-04008: WriteFile() failure, unable to write to file
O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
Can you tell me why I'm having issues with checkpointing and the control file?
Can I rebuild the control file if it is corrupt?
The problem has been going on since April 2008. I'm taking over the system.
Thanks -
Best practice for PK and indexes?
Dear All,
What is the best practice for primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and primary keys? Please note I am talking about a table that has 21 million rows at the moment, increasing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table with all associated objects, such as indexes and PK, is stored in one separate tablespace. If my current setup is not right, please advise how I can improve the performance of retrieval and DML operations on this table.
Thanks in advance..
Zia Shareef
Well, thanks for the valuable advice... I am using Oracle 8i; let me tell you the exact problem.
My billing database has two major tables having almost 21 million rows each: one holds collection data and the other invoices. Many reports show the data by joining the Customer + Collection + Invoices tables.
There are 5 common fields between the invoices (reading) and collection tables:
YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE (adtl)
One of my batch processes has the following update, and it is VERY VERY SLOW:
UPDATE reading r
SET bamount = (SELECT sum(camount)
FROM collection cl
WHERE r.ryear = cl.byear
AND r.rmonth = cl.bmonth
AND r.area_code = cl.area_code
AND r.cons_code = cl.cons_code
AND r.adtl = cl.adtl)
WHERE area_code = 1
Tentatively, area_code 1 has 20,000 consumers.
Each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
NOTE: Presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
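For what it's worth, the update above (with the missing '=' after bamount restored) can be sketched with Python's sqlite3 standing in for Oracle 8i; the toy data is invented. On Oracle, a composite index on collection (byear, bmonth, area_code, cons_code, adtl) is the kind of thing that lets each correlated subquery run as a quick index range scan instead of scanning collection per row.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE reading "
            "(ryear INT, rmonth INT, area_code INT, cons_code INT, adtl INT, bamount NUMERIC)")
cur.execute("CREATE TABLE collection "
            "(byear INT, bmonth INT, area_code INT, cons_code INT, adtl INT, camount NUMERIC)")
cur.execute("INSERT INTO reading VALUES (2008, 7, 1, 100, 0, NULL)")
cur.executemany("INSERT INTO collection VALUES (?, ?, ?, ?, ?, ?)",
                [(2008, 7, 1, 100, 0, 50),   # two partial payments
                 (2008, 7, 1, 100, 0, 25)])  # against one invoice

# Same shape as the forum UPDATE, run set-based rather than per cursor row:
cur.execute("""
    UPDATE reading
    SET bamount = (SELECT SUM(cl.camount)
                   FROM collection cl
                   WHERE reading.ryear     = cl.byear
                     AND reading.rmonth    = cl.bmonth
                     AND reading.area_code = cl.area_code
                     AND reading.cons_code = cl.cons_code
                     AND reading.adtl      = cl.adtl)
    WHERE area_code = 1
""")
print(cur.execute("SELECT bamount FROM reading").fetchone()[0])  # 75
```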
Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are current, and how I can refresh them if not? Do stale statistics really affect performance? -
'unable to connect' and index.php
Hi.
I am developing a Web Site and index.php is my point of entry.
Document Root Library/WebServer/Documents
so my path is: Library/WebServer/Documents/dwwdSite
httpd.conf file is modified to add index.php and have it listed first.
<IfModule dir_module>
DirectoryIndex index.php index.html
</IfModule>
Troubleshooting:
I was using Netbeans IDE, and when I ran index.php it opened in the browser.
When I launched any of my index.php files from Netbeans IDE, they opened correctly in the browser.
I am NOW using DreamweaverCC, and when I run index.php I get the error message 'Unable to Connect'.
For the last 2 days I have been working on this and I am completely stuck.
This morning I thought of another way to test the 'unable to connect' error.
I decided to copy this same file into Netbeans IDE and I NOW get the same Error Message ' Unable to Connect'
when running index.php from Netbeans.
Somehow, my settings are not correctly configured anymore.
Here are my screenshots from Dreamweaver > manage sites.
I believe that this is a rather simple fix that I am somehow not seeing.
Maybe someone can spot a mistake.
I appreciate your help and explanation.
Hi Sudarshan,
You have been very kind and very clear in your explanation.
One of the very best that I have ever communicated with on this forum !
I have checked many, many things.
I wanted to make certain that I killed apache and restarted it.
I do not think it is RUNNING at all.
1.
myNameMacBookPro:~ myName$ pwd
/Users/myName
2.
myNameMacBookPro:~ myName$ ps -ax | grep http
1892 ttys000 0:00.00 grep http
3.
myNameMacBookPro: myName$ hostname
local
4.
myNameMacBookPro:etc myName$ cd apache2
reginaMacBookPro:apache2 myName$ ls
extra httpd.conf.pre-update mime.types other
httpd.conf magic original users
5.
myNameMacBookPro:apache2 myName$ sudo nano httpd.conf
myNameMacBookPro:apache2 myName$ sudo apachectl -k restart
Syntax error on line 1 of /private/etc/apache2/users/myNameBU.conf:
Invalid command '{\\rtf1\\ansi\\ansicpg1252\\cocoartf1187\\cocoasubrtf370', perhaps misspelled or defined by a module not included in the server configuration
httpd not running, trying to start
myNameMacBookPro:apache2 myName$
6.
The above code may be a hint at part of the problem.
I created myNameBU.conf as a backup when I was editing the file.
QUESTION. Why is /private/etc/apache2/users/myNameBU.conf:
being referenced above ?
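Part of the answer: Apache's stock httpd.conf on OS X typically has an Include line for /private/etc/apache2/users/*.conf, so the backup file is still parsed, and the '{\rtf1...' in the error says that file was saved as RTF (e.g. by TextEdit) rather than plain text. A small Python sketch (hypothetical directory and file names) for spotting such files:

```python
import os
import tempfile

def suspicious_confs(conf_dir):
    """Return names of *.conf files that are not plain text
    (e.g. saved as RTF by TextEdit: they start with '{\\rtf')."""
    bad = []
    for name in sorted(os.listdir(conf_dir)):
        if not name.endswith(".conf"):
            continue
        with open(os.path.join(conf_dir, name), "rb") as f:
            if f.read(5) == b"{\\rtf":
                bad.append(name)
    return bad

# Tiny demo with a throwaway directory standing in for
# /private/etc/apache2/users/ (file names are made up):
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "myName.conf"), "w") as f:
    f.write("<Directory /Users/myName/Sites/>\n</Directory>\n")
with open(os.path.join(demo, "myNameBU.conf"), "w") as f:
    f.write("{\\rtf1\\ansi\\ansicpg1252}")
print(suspicious_confs(demo))  # ['myNameBU.conf']
```

Renaming the backup so it no longer ends in .conf (or resaving it as plain text) should let apachectl start.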
7.
I scanned my ports from preferences and found these two, which I believe are what you stated they should be:
port 80 is http
port 8080 is http-alt
8.
I have modified this line and tried it both ways restarting apachectl each time.
AllowOverride None
AllowOverride All
9.
And here is a part of my httpd.conf.
excerpts.
httpd.conf
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
User _www
Group _www
# DocumentRoot: The directory out of which you will serve your
# documents. By default, all requests are taken from this directory, but
# symbolic links and aliases may be used to point to other locations.
DocumentRoot "/Library/WebServer/Documents"
# Each directory to which Apache has access can be configured with respect
# to which services and features are allowed and/or disabled in that
# directory (and its subdirectories).
# First, we configure the "default" to be a very restrictive set of
# features.
<Directory />
Options FollowSymLinks
AllowOverride None
Order deny,allow
Deny from all
</Directory>
10.
I took a look at this too.
usr/bin
# path to httpd binary including options if necessary
HTTPD="/usr/sbin/httpd"
# pick up any environmental variables
if test -f /usr/sbin/envvars; then . /usr/sbin/envvars
fi
STATUSURL="http://localhost:80/server-status"
11.
Just a reminder.
I started writing .php scripts with Netbeans IDE.
The programs ran: I got output in the browser. Things worked just fine!
I started writing .php scripts with DreamweaverCC.
The programs NEVER ran.
I have always gotten 'Unable to Connect':
'Firefox/Safari can't establish a connection to the server at localhost.'
12.
QUESTION.
At one point I was on the phone with a member of the Adobe Technical Support Team.
They connect to my desktop (only) remotely. I am very cautious about this.
Could something have been inadvertently changed when they did this ?
They do need to connect through a PORT - yes ?
This is very frustrating and I am losing days of work.
I want to get back to Web Development.
I love Adobe products, but this should not be such a huge obstacle.
When I called Technical Support (after much troubleshooting individually and on this forum),
the fellow told me that they ONLY support FTP, not LOCALHOST.
This makes no sense. People develop locally, then FTP the site to the server in the production stage.
Again, I appreciate your assistance. -
'unable to connect' and 'localhost' and index.php and dreamweaverCC
Site window settings.
Site Name: dwwdSite
Local site folder: /Library/WebServer/Documents/dwwdSite
Server window settings.
Server Name: testing Server
Address: Macintosh HD/Library/WebServer/Documents/dwwdSite
Connect using: Local/network
Testing: yes (checked)
Server folder: /Library/WebServer/Documents/dwwdSite
(I also tried this: Server folder: /Library/WebServer/Documents)
Web URL: http://www.localhost/dwwdSite
Server Advanced tab: (within server window settings)
Testing server: PHP MySQL
Advanced Settings window.
Local info: Web URL: http://www.localhost/dwwdSite
Enable cache: yes (checked)