Friday, March 27, 2015

Reporting services - High availability setup - SQL server 2012


I need some guidance on best practices and setup for SSRS. I have a SQL Server 2012 setup with the data source on one instance and SSRS on a separate server. There is a need to set up HA, and I understand what is involved in the setup for the data source server. This is a large instance with many databases.


The SSRS instance is very small (350 MB). Should I set up an AlwaysOn Availability Group for the report server database, or configure a scale-out deployment? The goal is a smooth transition should one server go down. I am not very familiar with SSRS, i.e. the setup beyond the report database - Report Manager, etc. Would Failover Cluster Manager take care of the rest? I will have to guide the application team with this.


Any input on the direction I need to look and research would be greatly appreciated.


Thank you, Debbie.





How do you repair a 'Suspect' database?


My database has gone offline; it is marked as 'Suspect'.


Foolishly, I haven't got a recent backup.


Is there a way of quickly restoring the database?


I'm using MS SQL Server 2008.
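

For context, here is a minimal sketch of the emergency-mode repair path that is commonly discussed for suspect databases, assuming no backup or usable log file can be salvaged first; the database name is a placeholder, and REPAIR_ALLOW_DATA_LOSS can discard data:



-- last-resort sequence; try to recover files or a backup before this
ALTER DATABASE MyDb SET EMERGENCY;
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ('MyDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDb SET MULTI_USER;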





Moving DB Log destination?


I currently have the SQL Server data and log files going to the same drive. I would like to move the log files to their own drive. Is there an easy way to achieve this, and will it require downtime or a reboot of the server?
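

Assuming SQL Server (given the data/log file terminology), the usual approach is a brief outage for the database rather than a server reboot; a rough sketch, with database, logical file and path names as placeholders:



ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, FILENAME = 'E:\Logs\MyDb_log.ldf');
ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- copy the existing .ldf file to E:\Logs\ at the OS level, then:
ALTER DATABASE MyDb SET ONLINE;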





What will be the better database design for the following tables?


I am designing a database schema for a bulk mail system.


The campaign table is used to store the campaign details.



Table Campaign
-------------------------------------------------------------------------------
id | subject | message | status | sent | opened | bounced |senttime | User_id
-------------------------------------------------------------------------------


The campaign_log table will be used to store each recipient's status, e.g. whether the mail was sent, opened, or bounced for that email address.



Table Campaign_log
--------------------------------
id | campaign_id | email |status
---------------------------------


My questions are:



  1. Should I keep columns in the campaign table for storing the counts of sent, opened and bounced mails?

  2. Should I make another table for storing the statistical data like sent, opened and bounced mail counts? Could that improve performance? (See the sketch at the end of this question.)

  3. I would like to expand on the second question to describe my doubt (it may be a silly idea), using the example above. In the campaign table, the id, subject and message fields are filled in by the user, while the sent, opened and bounced columns are only filled in after the campaign is sent. Does having these unused columns at insert or update time affect performance? I mean that when inserting the campaign, the sent, opened and bounced columns are effectively unused, since we are not filling them.

  4. I read about OLTP and OLAP databases, but I am confused about the nature of my application: the database is going to store thousands of rows in tables such as the mailing list, there will be many update operations, and it also has a reporting and analysis component.


Also, when updating the count columns (sent, opened and bounced), the other columns are unused.


I am sorry if this is very basic, but I am a beginner at database design and I want to clear up this doubt.
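

To make question 2 concrete, here is a minimal sketch of deriving the per-campaign counts from campaign_log on demand instead of storing them in campaign; the status values are assumptions:



SELECT campaign_id,
       SUM(CASE WHEN status = 'sent'    THEN 1 ELSE 0 END) AS sent,
       SUM(CASE WHEN status = 'opened'  THEN 1 ELSE 0 END) AS opened,
       SUM(CASE WHEN status = 'bounced' THEN 1 ELSE 0 END) AS bounced
FROM campaign_log
GROUP BY campaign_id;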





Can I setup database triggers to run on a defined secondary set of servers only?


This might sound a bit off, but here's what I have been thinking for a while now:


Use Case


You want to build an activity log for each user action on your application using database (postgreSQL) triggers on every relevant table, outputting to an activity_log table. The triggers should do the trick, but how do we eliminate the burden of every user action triggering an action on the production servers, delaying the whole application?


Proposed Architecture


What I have in mind is a complex structure where one or more secondary Postgres nodes would take on the entire activity_log trigger activity. The triggers would be disabled on all primary nodes (the ones the application reads and writes to) and enabled on some/all secondary nodes (let's call them "workers"). Data would be written to a primary server (no trigger runs) and replication would forward it to all other nodes. When a "worker" node gets the data, the triggers process it and update the activity_log. The activity_log table should be replicated across ALL servers, which means that a "worker" node should be able to read, write and send selected data upstream.


Is there anything even close to this? Is this even possible to achieve without having to rewrite a replication model from scratch?
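

For reference, the closest built-in knob in stock PostgreSQL is the trigger firing mode; a minimal sketch follows, with the table and trigger names made up. Note that a physical streaming standby is read-only and never fires triggers during WAL replay, so this only helps with logical or trigger-based replication:



-- A trigger in REPLICA mode fires only in sessions where
-- session_replication_role = 'replica', so it stays silent on the primary
-- but runs where replicated rows are applied.
ALTER TABLE orders ENABLE REPLICA TRIGGER log_activity;

-- on the designated "worker" node / apply session:
SET session_replication_role = 'replica';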





Change MySQL master to a slave


I have a MySQL master/slave configuration



-------- --------
| master | -> | slave1 |
-------- --------


I want to kill my current master, transform slave1 to be the new master and have another slave pointing to slave1



-------- ------------ --------
| master | | new master | -> | slave2 |
-------- ------------ --------
KILLED


What is the best way to do this without database downtime? I am using MariaDB 10.0.17
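

A rough sketch of the usual promotion steps, assuming classic binlog-position replication; host names, credentials and binlog coordinates are placeholders and must be taken from SHOW MASTER STATUS on slave1 once it has caught up:



-- on slave1 (the node being promoted); it needs log_bin enabled:
STOP SLAVE;
RESET SLAVE ALL;    -- forget the old master

-- on slave2 (the new replica of slave1):
CHANGE MASTER TO
    MASTER_HOST = 'slave1.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS = 4;
START SLAVE;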





How to create a communication link failure in MySQL


For testing purposes I would like my MySQL server to generate a communication link failure (com.mysql.jdbc.exceptions.jdbc4.CommunicationsException). How can I tweak MySQL so that this error is easily created?
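

Two simple server-side ways to provoke this from a test client are sketched below; Connector/J typically raises CommunicationsException the next time a dropped connection is used. The connection id is a placeholder taken from SHOW PROCESSLIST:



-- let idle connections be dropped very quickly:
SET GLOBAL wait_timeout = 5;
SET GLOBAL interactive_timeout = 5;

-- or kill one specific connection from another session:
SHOW PROCESSLIST;
KILL 12345;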





mysql - order by first condition in where clause


I have a table consisting of the following fields



id, name


with the following values



1, ciro
2, test ciro
3, ciprox
4, other
5, other


I would like to get all the values that begin with "ci" or contain "ci", but show the rows that start with "ci" first and then the rest.


the query



select * FROM table WHERE name like '%ci' || name like '%ci%' order by name


It returns



1, ciro
2, test ciro
3, ciprox


I want



1, ciro
3, ciprox
2, test ciro


i.e. first the rows in which name begins with "ci", and then those that merely contain it.


Is there a way to obtain this result without using two queries?
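

One common approach, sketched here on the assumption that relying on MySQL's boolean-to-integer conversion is acceptable, is to sort on the "starts with" condition itself before sorting by name:



SELECT *
FROM `table`
WHERE name LIKE '%ci%'
ORDER BY (name LIKE 'ci%') DESC, name;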





Permissions for executing subscription in SSRS


I'd like to execute an SSRS subscription after some ETL work run by a SSIS package that takes a varying amount of time completes. I'm considering using this method, which basically consists of SSIS calling the same stored procedure that the SQL Server Agent jobs for subscriptions call to execute a subscription.
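

For context, the call those Agent jobs make boils down to something like the sketch below; the GUID is the subscription's SubscriptionID from ReportServer.dbo.Subscriptions and is only a placeholder here:



EXEC ReportServer.dbo.AddEvent
     @EventType = N'TimedSubscription',
     @EventData = N'00000000-0000-0000-0000-000000000000';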


However, I'm not sure what the best way of managing permissions to execute the subscription would be. For example, I'm reluctant to add users to the RsExecRole because it's discouraged by Microsoft in some documents, and I'm reluctant to grant object-level permissions to execute the ReportServer.dbo.AddEvent procedure because they'll likely get overlooked in any migrations. (Edit: I suppose we could work around this by having something execute the Agent job instead of calling the stored proc.)


I'm also hoping to avoid issues with the code that executes the subscription breaking when one of the GUIDs changes.


How do SSRS professionals manage permissions for this type of request? Alternatively, is there a simpler method that bypasses these issues?





Can't get Dell PE T420 (Perc H710) to perform better than a MacMini with PostgreSQL


I have been trying to solve a performance issue with PostgreSQL for months.


SYSTEM CONFIGURATION


Our deployment machine is a Dell PowerEdge T420 with a Perc H710 RAID controller configured in this way:



  • VD0: two 15k SAS disks (ext4, OS partition, WAL partition, RAID1)

  • VD1: ten 10k SAS disks (XFS, Postgres data partition, RAID5)


This system has the following configuration:



  • Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-48-generic x86_64)

  • 128GB RAM (DDR3, 8x16GB @1600Mhz)

  • two Intel Xeon E5-2640 v2 @2Ghz

  • Dell Perc H710 with 512MB cache (Write cache: "WriteBack", Read cache: "ReadAhead", Disk cache: "disabled"):

    • VD0 (OS and WAL partition): two 15k SAS disks (ext4, RAID1)

    • VD1 (Postgres data partition): ten 10k SAS disks (XFS, RAID5)



  • PostgreSQL 9.4 (updated to the latest available version)

  • moved pg_stat_tmp to RAM disk


My personal low cost and low profile development machine is a MacMini configured in this way:



  • OS X Server 10.7.5

  • 8GB RAM (DDR3, 2x4GB @1333Mhz)

  • one Intel i7 @2.2Ghz

  • two Internal 500GB 7.2k SAS HDD (non RAID) for OS partition

  • external Promise Pegasus R1 connected with Thunderbolt v1 (four 1TB 7.2k SAS HDD 32MB cache, RAID5, Write cache: "WriteBack", Read cache: "ReadAhead", Disk cache: "enabled", NCQ: "enabled")

  • PostgreSQL 9.0.13 (the original built-in shipped with OS X Server)

  • moved pg_stat_tmp to RAM disk


So far I've made a lot of tuning adjustments to both machines, including the kernel settings recommended on the official Postgres documentation site.


APPLICATION


The deployment machine runs a web platform which instructs Postgres to run big transactions over billions of records. It's a platform designed for one user, because system resources have to be dedicated as much as possible to one single job due to the data size (I don't like to call it big data, because big data is in the order of tens of billions).


ISSUES


I've found the deployment machine to be a lot slower than the development machine. This is paradoxical because the two machines really differ in many respects. I've run many queries to investigate this strange behaviour and have made a lot of tuning adjustments.


During the last two months I've prepared and executed two types of query sets:



  • A: these sets make use of SELECT ... INTO, CREATE INDEX, CLUSTER and VACUUM ANALYZE.

  • B: these sets are from our application generated transactions and make use of SELECT over the tables created with set A.


A and B were always slower on the T420. The only type of operation that was faster was VACUUM ANALYZE.


RESULTS


A type set:



  • T420: went from 311 seconds (default postgresql.conf) to 195 seconds after tuning adjustments to the RAID controller, kernel and postgresql.conf;

  • MacMini: 40 seconds.


B type set:



  • T420: 141 seconds;

  • MacMini: 101 seconds.


I should mention that we also adjusted the BIOS on the T420, setting all possible parameters to "performance" and disabling low-energy profiles. This lowered the execution time of a type A set from 240 seconds to 211 seconds.


We have also upgraded all firmware and the BIOS to the latest available versions.


Here are two benchmarks generated using pg_test_fsync:


T420 pg_test_fsync



60 seconds per test
O_DIRECT supported on this platform for open_datasync and open_sync.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 23358.758 ops/sec 43 usecs/op
fdatasync 21417.018 ops/sec 47 usecs/op
fsync 21112.662 ops/sec 47 usecs/op
fsync_writethrough n/a
open_sync 23082.764 ops/sec 43 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 11737.746 ops/sec 85 usecs/op
fdatasync 19222.074 ops/sec 52 usecs/op
fsync 18608.405 ops/sec 54 usecs/op
fsync_writethrough n/a
open_sync 11510.074 ops/sec 87 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
1 * 16kB open_sync write 21484.546 ops/sec 47 usecs/op
2 * 8kB open_sync writes 11478.119 ops/sec 87 usecs/op
4 * 4kB open_sync writes 5885.149 ops/sec 170 usecs/op
8 * 2kB open_sync writes 3027.676 ops/sec 330 usecs/op
16 * 1kB open_sync writes 1512.922 ops/sec 661 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 17946.690 ops/sec 56 usecs/op
write, close, fsync 17976.202 ops/sec 56 usecs/op

Non-Sync'ed 8kB writes:
write 343202.937 ops/sec 3 usecs/op


MacMini pg_test_fsync



60 seconds per test
Direct I/O is not supported on this platform.

Compare file sync methods using one 8kB write:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 3780.341 ops/sec 265 usecs/op
fdatasync 3117.094 ops/sec 321 usecs/op
fsync 3156.298 ops/sec 317 usecs/op
fsync_writethrough 110.300 ops/sec 9066 usecs/op
open_sync 3077.932 ops/sec 325 usecs/op

Compare file sync methods using two 8kB writes:
(in wal_sync_method preference order, except fdatasync
is Linux's default)
open_datasync 1522.400 ops/sec 657 usecs/op
fdatasync 2700.055 ops/sec 370 usecs/op
fsync 2670.652 ops/sec 374 usecs/op
fsync_writethrough 98.462 ops/sec 10156 usecs/op
open_sync 1532.235 ops/sec 653 usecs/op

Compare open_sync with different write sizes:
(This is designed to compare the cost of writing 16kB
in different write open_sync sizes.)
1 * 16kB open_sync write 2634.754 ops/sec 380 usecs/op
2 * 8kB open_sync writes 1547.801 ops/sec 646 usecs/op
4 * 4kB open_sync writes 801.542 ops/sec 1248 usecs/op
8 * 2kB open_sync writes 405.515 ops/sec 2466 usecs/op
16 * 1kB open_sync writes 204.095 ops/sec 4900 usecs/op

Test if fsync on non-write file descriptor is honored:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 2747.345 ops/sec 364 usecs/op
write, close, fsync 3070.877 ops/sec 326 usecs/op

Non-Sync'ed 8kB writes:
write 3275.716 ops/sec 305 usecs/op


This confirms the hardware I/O capabilities of the T420 but doesn't explain why the MacMini is MUCH FASTER.


Any ideas?





Treat database field as a flat file source


We have a partner sending CSV data to a web service of ours; the data then gets sent to a messaging system that is SQL Server based.


I am writing data to disk and then importing to another database by using this file as a flat file source in an SSIS Data Flow task.


Although this works I'd like to avoid using the file system and treat the database field as a flat file source.


Is this possible?


I'm very new to SSIS packages and searching for this scenario is very difficult because of the number of hits just relating to importing a csv file.





Can we create user-defined groups with different users and different schemas in Oracle 11g?


I want to create a group of different users who are able to access different databases. I need to give that particular group limits on accessing the database. Can anyone help resolve this issue? Thanks in advance.
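

In Oracle, the usual mechanism for this kind of grouping is a role; a minimal sketch, in which every object, user and role name is made up:



CREATE ROLE reporting_users;
GRANT CREATE SESSION TO reporting_users;
GRANT SELECT ON hr.employees TO reporting_users;

-- put the individual users into the group:
GRANT reporting_users TO alice;
GRANT reporting_users TO bob;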





Need a table schema for general store app


I am developing an application for an online general store and am stuck on designing the schema for the products. Every product has a brand, and a product can come in multiple varieties; for example, ice cream comes in multiple flavors, and every flavor has different sizes or units. The product will have an image for every size of the product.


So, can anyone please help me design the schema for this, i.e. the DB structure?
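

One possible normalized starting point is sketched below; every table, column and type here is an assumption rather than a prescription:



CREATE TABLE brand   (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE product (id INT PRIMARY KEY, brand_id INT REFERENCES brand(id), name VARCHAR(100));
CREATE TABLE variety (id INT PRIMARY KEY, product_id INT REFERENCES product(id), name VARCHAR(100)); -- e.g. flavor
CREATE TABLE variant (id INT PRIMARY KEY, variety_id INT REFERENCES variety(id),
                      size VARCHAR(50), unit VARCHAR(20), price DECIMAL(10,2), image_url VARCHAR(255));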





SQL Server execution plan using a view, but no view is inside the stored procedure


I am troubleshooting a slow stored procedure, and when checking the actual execution plan I noticed a warning about a possible missing index. Taking a closer look, it's against one of the views, not an actual table. How can that be possible?


Of course, the physical table is being used, but the actual view is not!


The view is something like this:



CREATE VIEW [ws].[MyVIEWTable]
WITH SCHEMABINDING
AS
SELECT col1, col2
FROM [dbo].[MyTable]


Why does the SQL Server engine use the view to retrieve data and not the actual physical table, which in this case would be dbo.MyTable?





Error on slave2 after promoting slave1 to master in cascaded streaming replication


I have a setup like MASTER -> SLAVE1 -> SLAVE2. MASTER and SLAVE2 are running on different ports on a single server. SLAVE1 is on another server.


For testing purposes I powered off my first server, which holds the MASTER and SLAVE2 databases, and then promoted SLAVE1 to master. When the server was turned on and SLAVE2 was started, I got the following errors in the SLAVE2 logs:



LOG: restarted WAL streaming at 0/13000000 on timeline 4
LOG: replication terminated by primary server
DETAIL: End of WAL reached on timeline 4 at 0/13008B38.


Any idea how to address this issue and have SLAVE2 continue replicating from SLAVE1, which is now promoted to MASTER?
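

For reference, the slave2 recovery.conf entries that usually matter for following a promoted node across a timeline switch look something like the sketch below (host, port and user are placeholders); the key part is recovery_target_timeline = 'latest', and slave2 also needs access to the new timeline history file (streamed in 9.3+, or via the WAL archive):



standby_mode = 'on'
primary_conninfo = 'host=slave1.example.com port=5432 user=repl'
recovery_target_timeline = 'latest'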


Thanks





PgAdmin III - Number of lines returned during insertion


Not crucial question here, just curiosity.


I am inserting data via multiple scripts in a base running in PostgreSQL.


I wanted to check how long the inserts take to finish, so I repeatedly requested the number of rows in the table as follows:



SELECT COUNT(*) FROM "table";


I noticed that every time, the row count follows a pattern: the last digits (hundreds, tens and units) are the same as the first ones (millions and thousands).


For example:



  • if there were around 39k rows, the count returned was 39039,

  • if there were around 8518k rows, the count returned was 8526518 (8518000 + 8518)


Any idea where this comes from? Is it default behaviour, or maybe my personal config?


More info :


Same behaviour on psql.


I am doing 1k inserts per connection to the database, so I don't think it's related.


When the inserts are over, the correct number of rows is returned.


I did enough requests to ensure it's not a coincidence.


I am working on Windows.




PS: I haven't found an existing post about this, but my search may have failed; sorry if that's the case.





Viewing execution plans for queries with parameters


I have a sample query generated by an application (Microsoft Dynamics AX 2012 in this case) which performs poorly (cross joins, order by, etc.). I would like to display its execution plan and attempt to tune it by indexing or rewriting some parts of it. I cannot just copy/paste it into SSMS because there are numerous parameters of many data types. I don't even know the values of these parameters.


Is there a way to quickly identify the execution plan of this query? Maybe by querying some DMVs? I got the query text from monitoring software, which must have done it.
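

Along the DMV route, a sketch like the one below can pull the cached plan for the statement; the LIKE pattern is just a placeholder for a distinctive fragment of the query text:



SELECT TOP (10)
       st.text,
       qp.query_plan,
       qs.execution_count,
       qs.total_worker_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%ax_tmp%'
ORDER BY qs.total_worker_time DESC;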



SELECT 2 AS f1,
T3.RECID AS f2,
T4.RECID AS f3,
T4.GENERALJOURNALACCOUNTENTRY AS f4,
T4.LEDGERDIMENSION AS f5,
Sum(T6.TRANSACTIONCURRENCYAMOUNT) AS f6,
T6.TRANSACTIONCURRENCY AS f7,
T6.MONETARYAMOUNT AS f8,
Sum(T7.ACCOUNTINGCURRENCYAMOUNT) AS f9,
N'aaa' AS DATAAREAID,
1 AS RECVERSION,
5637144576 AS PARTITION,
IDENTITY(bigint, 1, 1) AS RECID
INTO [##ax_tmp_tim99_151_7623]
FROM SUBLEDGERJOURNALENTRY T1
CROSS JOIN ACCOUNTINGDISTRIBUTION T2
CROSS JOIN TAXTRANS T3
CROSS JOIN SUBLEDGERJOURNALACCOUNTENTRY T4
CROSS JOIN ACCOUNTINGDISTRIBUTION T5
CROSS JOIN ACCOUNTINGDISTRIBUTION T6
CROSS JOIN SUBLEDGERJOURNALACCOUNTENTRYDISTRIBUTION T7
WHERE ( ( T1.PARTITION = @P1 )
AND ( ( ( ( T1.TRANSFERID = @P2 )
AND ( T1.LEDGER = @P3 ) )
AND ( T1.TYPE <> @P4 ) )
AND ( T1.TYPE <> @P5 ) ) )
AND ( ( T2.PARTITION = @P6 )
AND ( ( T2.ACCOUNTINGEVENT = T1.ACCOUNTINGEVENT )
AND ( ( ( T2.MONETARYAMOUNT <> @P7 )
AND ( T2.MONETARYAMOUNT <> @P8 ) )
AND ( T2.MONETARYAMOUNT <> @P9 ) ) ) )
AND ( ( ( T3.PARTITION = @P10 )
AND ( T3.DATAAREAID = @P11 ) )
AND ( T3.SOURCEDOCUMENTLINE = T2.SOURCEDOCUMENTLINE ) )
AND ( ( T4.PARTITION = @P12 )
AND ( ( ( ( T4.SUBLEDGERJOURNALENTRY = T1.RECID )
AND ( T4.POSTINGTYPE <> @P13 ) )
AND ( T4.POSTINGTYPE <> @P14 ) )
AND ( T4.POSTINGTYPE <> @P15 ) ) )
AND ( ( T5.PARTITION = @P16 )
AND ( T5.RECID = T2.PARENTDISTRIBUTION ) )
AND ( ( T6.PARTITION = @P17 )
AND ( ( ( T6.SOURCEDOCUMENTLINE = T5.SOURCEDOCUMENTLINE )
AND ( T6.PARENTDISTRIBUTION = T5.RECID ) )
AND ( ( ( T6.MONETARYAMOUNT = @P18 )
OR ( T6.MONETARYAMOUNT = @P19 ) )
OR ( T6.MONETARYAMOUNT = @P20 ) ) ) )
AND ( ( T7.PARTITION = @P21 )
AND ( ( ( T7.SUBLEDGERJOURNALACCOUNTENTRY = T4.RECID )
AND ( T7.ACCOUNTINGDISTRIBUTION = T6.RECID ) )
AND ( ( ( T7.ACCOUNTINGCURRENCYAMOUNT >= @P22 )
AND ( T6.TRANSACTIONCURRENCYAMOUNT >= @P23 ) )
OR ( ( T7.ACCOUNTINGCURRENCYAMOUNT < @P24 )
AND ( T6.TRANSACTIONCURRENCYAMOUNT < @P25 ) ) ) ) )
GROUP BY T3.RECID,
T4.RECID,
T4.GENERALJOURNALACCOUNTENTRY,
T4.LEDGERDIMENSION,
T6.TRANSACTIONCURRENCY,
T6.MONETARYAMOUNT
ORDER BY T3.RECID,
T4.RECID,
T4.GENERALJOURNALACCOUNTENTRY,
T4.LEDGERDIMENSION,
T6.TRANSACTIONCURRENCY,
T6.MONETARYAMOUNT




Mysql cursor alternative / Optimization - each row updates are too slow


I'm looking to optimize my cursor-based update, or actually replace it...


Situation


We're carrying out promotional campaigns and I'd like to track user activity per campaign.


Logic


Each campaign is pushed to a specific batch - a segment of our customer base:



CREATE TABLE `segments` (
`campaign_id` int(6) DEFAULT NULL,
`customer_id` varchar(20) DEFAULT NULL,
`tracking_start_date` date DEFAULT NULL,
`tracking_end_date` date DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1


The tracking_start_date is the date of the campaign while tracking_end_date is the date tracking should end.


Each campaign has its own "call to action" (cta), which is the transaction type we're pushing and hoping customers will start using after the campaign.



CREATE TABLE `cta` (
`campaign_id` int(11) DEFAULT NULL,
`Date` date DEFAULT NULL,
`segment` varchar(100) DEFAULT NULL,
`message` varchar(320) DEFAULT NULL,
`Size` int(11) DEFAULT NULL,
`cta` varchar(100) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1


By default the tracking_end_date in the segments table is set to the last day of the month, but I've created a procedure to check and update this field. (campaign_ids are issued sequentially according to the campaign date, so the earliest campaign has the smallest campaign_id value and vice versa.) Tracking is done on a calendar month-by-month basis.


Update scenario


For each record in the segments table, check whether the same customer_id appears in a future campaign and whether that campaign with a greater tracking_start_date has the same CTA.


If TRUE: change the tracking_end_date for that record to a day before the new campaign.


If FALSE: retain last day of tracking_start_date month as the tracking_end_date.


If the update is not done, then we would be double/triple-counting transactions for customers who appear in several campaigns and had the same CTA.


Below is the procedure I'm currently using but the problem is that it's too slow.


This procedure is housed in another one that loops through the campaign_ids for the month and then calls this procedure with the relevant campaign_id.



DELIMITER $$

CREATE DEFINER=`root`@`localhost` PROCEDURE `set_campaign_end_date_child`(IN var_campaign_id INT)
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE var_customer_id VARCHAR(20);
    DECLARE var_tracking_start_date DATE;
    DECLARE cur1 CURSOR FOR SELECT DISTINCT customer_id FROM segments WHERE campaign_id = var_campaign_id;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    -- perform cursor update loop now
    OPEN cur1;
    read_loop: LOOP
        IF done THEN
            LEAVE read_loop;
        END IF;
        FETCH cur1 INTO var_customer_id;
        SELECT DISTINCT DATE INTO var_tracking_start_date FROM cta WHERE campaign_id = var_campaign_id;
        UPDATE segments SET tracking_end_date =
            (SELECT IFNULL(DATE_SUB(MIN(tracking_start_date), INTERVAL 1 DAY), LAST_DAY(var_tracking_start_date))
             FROM segments_temp
             WHERE customer_id = var_customer_id
               AND campaign_id IN (SELECT campaign_id FROM cta
                                   WHERE cta IN (SELECT cta FROM cta WHERE campaign_id = var_campaign_id)
                                     AND campaign_id > var_campaign_id))
        WHERE customer_id = var_customer_id AND campaign_id = var_campaign_id;
    END LOOP read_loop;
    CLOSE cur1;
END$$

DELIMITER ;


PS: Before initiating the procedure, I make a copy of the segments table in another one called segments_temp and do the comparison from there (this is because MySQL cannot do updates from a self-referencing query).


Hope I'm clear & thanks in advance for your ideas
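

For what it's worth, here is a rough, untested set-based sketch of the same logic that processes all campaigns at once instead of one row at a time; it keeps the segments_temp copy and assumes one cta row per campaign_id:



UPDATE segments s
JOIN cta c ON c.campaign_id = s.campaign_id
LEFT JOIN (
    -- for each (customer, campaign): earliest later campaign with the same cta
    SELECT st.customer_id,
           c_cur.campaign_id                 AS campaign_id,
           MIN(st_later.tracking_start_date) AS next_start
    FROM segments_temp st
    JOIN cta c_cur   ON c_cur.campaign_id = st.campaign_id
    JOIN cta c_later ON c_later.cta = c_cur.cta
                    AND c_later.campaign_id > c_cur.campaign_id
    JOIN segments_temp st_later
                     ON st_later.customer_id = st.customer_id
                    AND st_later.campaign_id = c_later.campaign_id
    GROUP BY st.customer_id, c_cur.campaign_id
) nxt ON nxt.customer_id = s.customer_id
     AND nxt.campaign_id = s.campaign_id
SET s.tracking_end_date = COALESCE(DATE_SUB(nxt.next_start, INTERVAL 1 DAY),
                                   LAST_DAY(c.Date));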





delete query performance issue on mysql


I have two tables, table1 and table2, where table2 is a subset of table1 (basically created with the query "create table table2 as select * from table1 where ...").


Now I am trying to run the query below, but it's taking almost 30 minutes just to delete 10 records.


delete from table1 where column1 in (select column2 from table2) order by column1 limit 10


If I remove the "order by" clause from the above query, performance improves, but I get the warning below:


Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. The statement is unsafe because it uses a LIMIT clause. This is unsafe because the set of rows included cannot be predicted.


Please suggest a way to improve the performance of the delete query.
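

One pattern worth trying is sketched below, on the assumption that column1 is table1's key: select the 10 ids in a derived table first (the extra wrapper is what lets MySQL accept both the LIMIT and the reference to the target table), then delete by that id list.



DELETE FROM table1
WHERE column1 IN (
    SELECT column1
    FROM (
        SELECT t1.column1
        FROM table1 AS t1
        JOIN table2 AS t2 ON t2.column2 = t1.column1
        ORDER BY t1.column1
        LIMIT 10
    ) AS pick
);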





Subquery optimization


I was reading through this article - http://ift.tt/14VgixH


I am working with an already existing design.


What I want is to be able to select all ids from table a which are not in table b.


I am using this query -



SELECT a.* FROM `orders` a LEFT JOIN `orders_corrected` b ON
a.`order_id`=b.`order_id` WHERE b.`order_id` IS NULL;


Table a contains over 900K records and table b contains over 200K and growing.


The above query takes about 7-8 minutes.


I also tried using NOT IN, which is slow as well.


The order_id column is NOT NULL and has a UNIQUE index.


Output of explain:



*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: a
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 595783
Extra: NULL
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: b
type: eq_ref
possible_keys: PRIMARY,order_id_UNIQUE,ix_order_id
key: PRIMARY
key_len: 152
ref: func
rows: 1
Extra: Using where; Not exists; Using index


Any help would be great.
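

For completeness, the NOT EXISTS form of the same anti-join is sketched below; with order_id indexed in orders_corrected it is usually executed much like the LEFT JOIN ... IS NULL version, so it mainly gives another variant to compare plans against:



SELECT a.*
FROM `orders` AS a
WHERE NOT EXISTS (
    SELECT 1
    FROM `orders_corrected` AS b
    WHERE b.order_id = a.order_id
);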





Migrating from 32 bit using pg_basebackup


Has anyone migrated a 32 bit postgres 9.2 database to a 64 bit postgres 9.2?


I'm trying to use pg_basebackup, but I'm not sure whether this is the right approach.


Thanks in advance


Peter





MySQL Workbench: How to detach reverse-engineered EER diagram from existing database?


MySQL Workbench provides the option to 'Create EER Model From Existing Database'. Trying to remove a table from the resulting diagram leads to a dialog asking:



Should corresponding database objects be deleted with the figures?



As the database must be left untouched in any case, my question is now: How to safely detach the EER model / diagram from the underlying database?





Is it possible to get seek based parallel plan for distinct/group by?


An example from this question shows that SQL Server will choose a full index scan to solve a query like this:



select distinct [typeName] from [types]


Where [typeName] has a non-clustered, non-unique ascending index on it. His example has 200M rows but only 76 unique values. It seems like a seek plan would be a better choice with that density (~76 multiple binary searches)?


His case could be normalized but the reason for the question is that I really want to solve something like this:



select TransactionId, max(CreatedUtc)
from TxLog
group by TransactionId


There is an index on (TransactionId, MaxCreatedUtc).


Re-writing using a normalized source (dt) does not change the plan.



select dt.TransactionId, MaxCreatedUtc
from [Transaction] dt -- distinct transactions
cross apply
(
select Max(CreatedUtc) as MaxCreatedUtc
from TxLog tl
where tl.TransactionId = dt.TransactionId
) ca


Running just the CA subquery as a Scalar UDF does show a plan of 1 seek.



select max(CreatedUtc) as MaxCreatedUtc
from Pub.TransactionLog
where TransactionID = @TxId;


Using that Scalar UDF in the original query seems to work but loses parallelism (known issue with UDFs):



select t.typeName,
Pub.ufn_TransactionMaxCreatedUtc(t.TransactionId) as MaxCreatedUtc
from Pub.[Transaction] t


Plans for Cross apply, Just the UDF, Using UDF


Rewriting using an Inline TVF reverts it back to the scan based plan.


From answer/comment @ypercube:



select TransactionId, MaxCreatedUtc
from Pub.[Transaction] t
cross apply
(
select top (1) CreatedUtc as MaxCreatedUtc
from Pub.TransactionLog l
where l.TransactionID = t.TransactionId
order by CreatedUtc desc
) ca


Plan using top/order


The plan looks good. No parallelism, but that's moot since it's so fast. I will have to try this on a larger problem sometime. Thanks.





master-slave vs masterless architecture


As per Riak's doc


"Most relational databases leverage a master/slave architecture to replicate data. ...


Conversely, Riak uses a masterless system with no single point of failure, meaning any node can serve read or write requests."


What are the advantages of using masterless instead of master-slave?


We can simulate the same masterless behaviour in a master-slave architecture, where at any time, when the master is down, a slave node can become the master.


I'm not able to understand the advantages of one over the other.


I don't know where to ask; please provide a link to where I can ask this question if this is not the correct place.





How do I define a 'flexible' relationship in my table?


I'm working on a large application, with tons of tables and a huge codebase. It's a system in which people have rights to use functionality X and/or functionality Y and/or functionality Z.



  • Functionality X may be a user-related record in a table called dossiers.

  • Functionality Y may be a user-related record in a table called portfolios.

  • Functionality Z may be a user-related record in a table called api-couplings.


There's a bunch more, but I'm trying to keep it straightforward here. The problem I'm facing is that there's no overarching object called Product - a table in which we set the prices, the invoicing interval and whatnot. Functionalities X to Z are currently granted to the user by an admin who just adds a record to those respective tables. That's going to change; a user will be able to just select a product, pay, and use it right away.


How do I set up this table flexibly? How do I refer to those individual tables? An option I do not want is this:



  • id

  • product_name

  • price

  • functionality_x_id (nullable)

  • functionality_y_id (nullable)

  • functionality_z_id (nullable)


That just feels wrong and will have me change the database table every time we add a new product type.


Something I did think about is something like this:



  • id

  • (...)

  • functionality_table (functionality_x, functionality_y, functionality_z)

  • functionality_record_id


That sounds way more flexible. I do lose my relational integrity, but I don't know if that's such a big issue.


Just to sum it up: in the end, the user has to select a product, from which a record in any of the applicable tables will be created for him/her, thus granting the user the privilege to that functionality.





sys.dm_os_volume_stats problem


Today I have been having quite an interesting issue which I can't seem to fix. I am getting drive information for a SQL Server 2012 Failover Cluster Instance. These drives are volume mount points. For about 40 instances this runs perfectly, except for one FCI where it does not retrieve the right drive.


The query I use is



select
case when type_desc = 'ROWS' then 'DATA' when type_desc = 'LOG' then 'LOG' else 'Unknown' end as DataOrLog,
mf.database_id,
mf.name,
mf.physical_name,
Drive.volume_mount_point,
Drive.logical_volume_name
from sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.FILE_ID) Drive
where mf.database_id not in (1,2,3,4)


The normal result I get for this query is.


NormalSituation


This is the result I get with the problem case.


WrongSituation


What can be the reason that sys.dm_os_volume_stats can't find the right volume?


Is it some wrong configuration at the FCI level?


Why does sys.master_files show the right drive, but the file ID cross-applied to sys.dm_os_volume_stats does not? How does this connection work with this DMF?


I also checked the drive itself and the file is definitely in the right drive.





Compressing partition - what about index?


To save space on disk I want to compress some partitions of my tables:



alter table TEST_TAB move partition TEST_TAB_2014 compress for all operations;


I have also indexes on this partition.

Should I recreate indexes after compression, or is it not necessary?
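

For reference, a short sketch of what typically needs checking afterwards: ALTER TABLE ... MOVE PARTITION usually leaves the corresponding local index partitions UNUSABLE, so they need a rebuild (the index and partition names below are placeholders):



SELECT index_name, partition_name, status
FROM   user_ind_partitions
WHERE  status = 'UNUSABLE';

ALTER INDEX test_tab_idx REBUILD PARTITION test_tab_2014;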





Unable to get the latest value added to Mysql table based on posted time


SELECT * FROM discussion_comments GROUP BY disc_id ORDER BY posted_date DESC


I have an example table like the one given below:



CREATE TABLE example
(
id int(11),
cname varchar(11),
posted_date date,
posted_time varchar(20)
);


with values like:



INSERT INTO example
VALUES (1,'abc','2015-03-26','04:25 PM');

INSERT INTO example
VALUES (1,'def','2015-03-27','04:30 PM');

INSERT INTO example
VALUES (2,'ghi','2015-03-11','02:25 AM');

INSERT INTO example
VALUES (2,'jkl','2015-03-15','12:25 PM');


and I am trying to get only the latest value added to the table for each id, based on the posted_date & posted_time fields.


The result I am trying to achieve is:



(1,'def','2015-03-27','04:30 PM')
(2,'jkl','2015-03-15','12:25 PM')


The query I tried is as follows:



SELECT * FROM `example` GROUP BY id ORDER BY posted_date DESC


I am not getting the desired result. Where did I go wrong?
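

A common pattern for this kind of "latest row per id" requirement is sketched below; it assumes the text posted_time can be parsed with STR_TO_DATE so that date and time can be compared together:



SELECT e.*
FROM example AS e
JOIN (
    SELECT id,
           MAX(STR_TO_DATE(CONCAT(posted_date, ' ', posted_time), '%Y-%m-%d %h:%i %p')) AS latest
    FROM example
    GROUP BY id
) AS m
  ON m.id = e.id
 AND STR_TO_DATE(CONCAT(e.posted_date, ' ', e.posted_time), '%Y-%m-%d %h:%i %p') = m.latest;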





Retrieve user Permissions on a database


Could you please provide a query that shows a user's (John Felix) permissions on a server/database?
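

Assuming SQL Server, a sketch along these lines lists the database-level permissions for one principal, or lets the engine report them directly by impersonating the user; the user name is taken from the question:



SELECT pr.name, pr.type_desc, pe.class_desc, pe.permission_name, pe.state_desc,
       OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_principals AS pr
JOIN sys.database_permissions AS pe ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'John Felix';

-- or, impersonate the user and ask directly:
EXECUTE AS USER = 'John Felix';
SELECT * FROM fn_my_permissions(NULL, 'DATABASE');
REVERT;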





Quorum settings in failover clusters


I have a situation with three servers in a cluster, and the quorum voting mode is Node Majority.


At the start, all three nodes vote with "1", meaning that all three nodes are up. Then I force a shutdown of the first node, which was the owner of the role, and my second node takes ownership. But when I look at the votes, I see that the first node is down, but the third node is also down.


My question is: why isn't the complete cluster configuration down, given that there are not enough votes?


How can I have fewer up votes than down votes and still have an operational cluster?





Upgrading a database from SQL Server 2005 to 2012 uncovered a syntax error


I recently upgraded from SQL Server 2005 to 2012. During validation however, a bug was discovered.


A certain trigger was coded as follows:



CREATE TRIGGER [dbo].[trigger] on [dbo].[foo]
FOR UPDATE, UPDATE
AS
UPDATE foobar
SET datetime = GetDate()
FROM bar
WHERE foobar.id = bar.id
GO


I can safely execute this (oddly) on SQL Server 2005.


However on SQL Server 2012 it throws (what I would expect) a syntax error.



Syntax error: Duplicate specification of the action "UPDATE" in the trigger declaration.



Why does this not throw a syntax error on SQL Server 2005? My google-fu on this has failed me.


Why does this seemingly work on SQL Server 2005?





Upgrade from SQL Server database server To SQL Server Report Server [on hold]


I'm quite new to SSRS. I have a SQL Server database; until now every report was created at the C# code level, with RDLC files embedded in the program.


But now I want to move the reports to a report server, and get the reports from there.


I set up SSRS but it is empty (obviously), and I just don't get whether I have to pull the data from the database server or whether it must be inside SSRS.





MYSQL Get column group with different values on other column


I have the following table:



week status
------- ------
102014 1
102014 1
102014 1
112014 2
112014 2
112014 2
122014 2
122014 0
122014 0
132014 1
132014 1
132014 1
142014 2
142014 2
142014 2


How can I get 122014, which has different values in status?
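

A sketch of one way to express that, assuming the table is called week_status; it returns every week whose rows do not all share the same status value:



SELECT week
FROM week_status
GROUP BY week
HAVING COUNT(DISTINCT status) > 1;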





What is the performance benefit when using a temporary table to update a regular table?


I'm trying to understand the performance difference between two different methods of updating multiple table rows.


For the first method we have an explicit update statement for each line to be altered.


i.e.



update my_table set my_field='A' where id=1;
update my_table set my_field='B' where id=2;
update my_table set my_field='C' where id=3;
update my_table set my_field='D' where id=4;
update my_table set my_field='E' where id=5;


For the second method we use a temporary table.


i.e.



create temporary table my_temp_table (id int primary key, val varchar(1));

insert into my_temp_table values (1,'A'),(2,'B'),(3,'C'),(4,'D'),(5,'E');

update my_table t, my_temp_table tt set t.my_field = tt.val where t.id = tt.id;


I've been told that the second method is more efficient and thus superior. I just don't understand why this is so.


Sure, we have 5 queries vs. 3 queries, but that last query is essentially still doing all those updates, albeit implicitly. Plus it adds in a join too.


So I'm just curious as to why using a temporary table is better. Is it simply that 3 is less than 5 and, as such, better?





Benefits of table partitioning on same disk


Maybe this is a stupid question, but I must ask to be clear about it.


I know that table partitioning is a powerful mechanism for speeding up data searches when partitions are placed on different disks.


What I need to know is: what do I gain if I place partitions on the same disk?





amazon rds logs all queries


I have enabled the slow query log on Amazon RDS with the following configuration:



long_query_time = 0.5
general_log = 0
log_output = TABLE
min_examined_row_limit = 1000
slow_query_log = 1
log_queries_not_using_indexes = 1


To read the slow query log, I am using the following command from MySQL Workbench:



select * from mysql.slow_log where db = 'my_database' order by start_time desc;


But when I look at the log table, it shows me lots of queries used by MySQL itself; for example, it shows me "performance_schema"-related queries too. I want to filter it down to my queries only. Is there an option for this?
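

One low-tech option is to filter in the SELECT itself; a sketch that skips references to the internal schemas and RDS's own rdsadmin user (adjust the patterns as needed):



SELECT *
FROM mysql.slow_log
WHERE db = 'my_database'
  AND user_host NOT LIKE 'rdsadmin%'
  AND sql_text NOT LIKE '%performance_schema%'
ORDER BY start_time DESC;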





Thursday, March 26, 2015

How to handle too many inserts?


I have a database which is accessed from remote locations around the world with very long delays and poor transfer speeds. I currently transfer the data using a transaction, since either all or no data should be inserted. Currently the values are inserted one at a time, and it takes a total of about 37 seconds for 200 inserts before the commit.


The problem is that there may be 10 sites continuously trying to insert data at a 2-minute interval. If they fail, they will retry until they succeed. This adds up to an impossible situation, as they will start growing a backlog.


I understand I can use the snapshot isolation level to allow reads in parallel. But how should I handle the parallel inserts? How do banks and other systems do it, since they would also get a backlog if they did it in a queue?


EDIT1: I use MS SQL Server, and on the remote client I use C# to talk to the SQL Server.


EDIT2: The inserts are of 10 different types, and they are linked in pairs of two.


EDIT3: The inserts are connected in pairs. The first insert is a generic one which is the same for all types and goes into the same table, let's call it table_A. The second insert is specific to the type of insert data; it is made to 1 of 10 tables and the data varies greatly, but that table also stores the row index of the first insert in table_A. This is how they are linked.


EDIT4: The transfer speed from a good connection is about 1 second, and that is no problem. The problem is the remote part, which takes about 40 seconds. Table_A contains about 10^7 rows and is well indexed. All other tables are somewhat smaller and well indexed.





SQL database to oracle migration


The converted model resulted in incomplete script generation: all stored procedures, triggers and views were missing, only table structures were generated, and I did not get any error messages about it. I followed the documentation below


http://ift.tt/1E8MOz7.


at this step: "Generating and Executing the Script to Create the Oracle Database Objects".





Heap having high fragmentation


Does high fragmentation on a heap impact query performance? Can I rebuild the heap with ALTER TABLE ... REBUILD to get rid of the fragmentation?
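

A short sketch of what that looks like on SQL Server 2008 or later (index_id 0 is the heap; the table name is a placeholder):



SELECT avg_fragmentation_in_percent, forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyHeap'), 0, NULL, 'DETAILED');

ALTER TABLE dbo.MyHeap REBUILD;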





Access denied to user@localhost


As the root user I did this: GRANT DELETE, SELECT, INSERT, UPDATE, EXECUTE ON dbName.* TO 'user'@'localhost' IDENTIFIED BY 'apassword';


In 'dbName' I created a procedure 'proc_name' with the definer set to 'user'@'localhost'.


Similarly, I created a function 'func_name' with the definer set to 'user'@'localhost'.


Now the issue is, I am able to do select func_name();, but I am not able to call the procedure proc_name(). When I do:


call proc_name(), I get the error:


ERROR 1044 (42000): Access denied for user 'user'@'localhost' to database 'dbName'


Any pointers to the solution would be appreciated.





Optimize Query in MySQL


I have the following tables with their respective fields:



1) wi_individual_g(ind_id,ind_is_recepient,ind_deleted,ind_district_id,...) - Around 200K data
2) wi_individual_p(ind_id,prg_id,...) Around 250K data
3) wi_training(trn_id,trn_start_date,trn_deleted,trn_beneficiary_type,...) - Around 25K data
4) wi_indv_training(ind_id,trn_id,is_deleted) - Around 450K data
5) wi_district(dst_id,dst_name) - Around 75 data


I was required to define a query to report the unique individuals who participated in training within the given ranges of training dates (per dst_name). So I created the following query to fetch the records:



SELECT
wi_district.dst_name,
COUNT(DISTINCT (CASE
WHEN wi_training.trn_start_date BETWEEN '2014-07-01' AND '2015-06-30' THEN wi_individual_g.ind_id
END)) AS y3,
COUNT(DISTINCT (CASE
WHEN wi_training.trn_start_date BETWEEN '2013-07-01' AND '2014-06-30' THEN wi_individual_g.ind_id
END)) AS y2,
COUNT(DISTINCT (CASE
WHEN wi_training.trn_start_date BETWEEN '2013-02-01' AND '2013-06-30' THEN wi_individual_g.ind_id
END)) AS y1
FROM
wi_individual_g
INNER JOIN
wi_individual_p ON wi_individual_p.ind_id = wi_individual_g.ind_id
AND wi_individual_g.ind_is_recepient = 'yes'
INNER JOIN
wi_district ON wi_district.dst_id = wi_individual_g.ind_district_id
AND wi_individual_g.ind_deleted = 0
INNER JOIN
wi_indv_training ON wi_indv_training.ind_id = wi_individual_g.ind_id
AND wi_indv_training.is_deleted = 0
INNER JOIN
wi_training ON wi_training.trn_id = wi_indv_training.trn_id
AND wi_training.trn_deleted = 0
AND wi_training.trn_beneficiary_type = 2
AND wi_training.trn_start_date <= '2015-06-30'
GROUP BY wi_district.dst_name


Indexes have been applied to every field used in the ON and AND clauses. This query takes around 4 minutes to execute and fetch the records. What other optimizations can I apply to the present query? Please provide some solutions for MySQL!





Use ID from another mysql database?


I have a mysql database for form submission (written in PHP) data and a mysql database for CRM (using X2CRM) on the same server.


When a form is submitted on my website, the POSTed form data is sent by email as notifications to customers who have signed up for a contract. These email addresses belong to customers stored in the CRM database.


I would like to reference customer IDs as a FK with the form submission id on a table in the form submission database.


Is it better to directly access the CRM database with SQL and insert the customer id accordingly into the form submission table, as explained here: http://ift.tt/1EZV6XG ?


I would query and use only the customer IDs that have contract status set to active.


Or should I maintain a table in the form submission DB with the required information about the customers (such as name, address, email, contract status)? That would obviously mean overhead to keep the data in sync with the CRM.


So is it best to go with the first option? And is it OK to use PK IDs from one DB as FKs in a table in another DB?


I guess with the first option I will need to query the CRM database and use the relevant values (ID and email address) for each customer record every time a form is submitted? What would be the efficient and recommended way to do this?





Applying Quotes Across Multiple Lines


In SQL Server Management Studio you can use the shortcut CTRL + K > C or CTRL + K > U to comment/uncomment lines in bulk.


Is there a similar keyboard function that applies single quotes across a list of values? I have 150 IDs that I have copied from an Excel spreadsheet, and I would like to execute a query that uses an IN clause on this list; I don't want to manually add single quotes and a comma around each item.


For example:



SELECT * FROM tbl WHERE someValue IN (
ABC1
ABC2
ABC3
ABC4
)


I want to highlight the values and using a keyboard shortcut turn the list into:



SELECT * FROM tbl WHERE someValue IN (
'ABC1',
'ABC2',
'ABC3',
'ABC4'
)




Creating a database under a different owner


I'm trying to set up a role in Postgres which can a) create a new role, b) create a new database, c) make that new role the owner of the database, and d) have no other rights (as far as possible!).


I have tried this:



sc_1=# CREATE ROLE tenant_admin CREATEDB CREATEROLE;
CREATE ROLE
sc_1=# CREATE ROLE user1 IN ROLE tenant_admin LOGIN NOINHERIT ENCRYPTED PASSWORD 'xyz';
CREATE ROLE
sc_1=#


Followed by (in another session)



tahaan@Komputer:~/projects/acme-project$ psql -U user1 -h localhost -d postgres
Password for user user1:
psql (9.3.6)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.

postgres=> SET ROLE tenant_admin;
SET
postgres=> CREATE DATABASE "Tenant1" TEMPLATE "tenant_template";
CREATE DATABASE
postgres=> CREATE ROLE "owner1";
CREATE ROLE
postgres=> ALTER DATABASE "Tenant1" OWNER TO "owner1";
ERROR: must be member of role "owner1"
postgres=>


Background: the requirement is to have an automated function that can set up separate databases in a multi-tenant system. My hope is that this function can be performed by a role that does not have too many rights.
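

For what it's worth, ALTER DATABASE ... OWNER TO requires the altering role to be a member of the new owning role, as the error says; since tenant_admin has CREATEROLE, a sketch of the missing step might look like this (names as in the session above):



CREATE ROLE "owner1";
GRANT "owner1" TO tenant_admin;   -- membership is required before handing over ownership
ALTER DATABASE "Tenant1" OWNER TO "owner1";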





Excel Data Connection to MS-SQL using Windows Auth from untrusted machine


We have an Excel Spreadsheet that uses a Data Connection to a MS-SQL database. The database server is joined to our domain, and Windows Authentication is enabled.


Here is a sample of the connection string (with revealing info obfuscated):


Description=DataConnection;DRIVER=SQL Server;SERVER=10.x.y.z;UID=domain_user;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=DatabaseName


And the Authentication Settings are set to "Windows Authentication"


This works perfectly fine from a domain-joined / trusted machine. It doesn't even ask for credentials; it just works flawlessly when I hit Refresh.


It will work from a non-domain / un-trusted workstation, if I use SQL Authentication. But we are trying to move away from that.


Likewise, it will work from a non-domain machine using Windows Authentication if I use "runas" to launch Excel with domain credentials, as follows:


"runas /netonly /user:domain\user %path-to-excel%\excel.exe"


However, using "runas" is a kludge, and our employees will almost certainly have issues... even if I were to create batch files to help automate it.


When launching Excel normally (without using "runas") from a non-domain machine, it gives this error when I hit Refresh:


img1


Upon hitting OK, I have the option to un-check "Use Trusted Connection" and manually enter a user/password. I put in a valid user that would be able to connect fine from a domain-joined machine, or that worked fine using "runas". This is the error it gives with that:


img2


At this point, I have not a clue what to do to get Excel to successfully connect with domain credentials from a non-domain machine.


Ideas?





How to use a trigger to only insert one row


I am doing a third-year project using an Intel Galileo that reads an RFID card and signs a student into a class, and I am using multiple tables.


I have nearly completed it, as all reads are being recorded into a table called read_log. What I want to achieve is that when a record is entered into this read_log, a trigger inserts just the card number, which again is what is sent to this read_log table.


So what I have so far is:



DROP TRIGGER IF EXISTS `test`;

CREATE DEFINER=`root`@`localhost` TRIGGER `test` AFTER UPDATE ON `readlog`
FOR EACH ROW
BEGIN
INSERT into attendence(S_no)

SELECT s.Stu_no from student s
JOIN readLog r ON s.Stu_no=r.Cardid WHERE s.Stu_no=r.Cardid;

END


I can confirm the trigger is working: once I scan the RFID card over the Galileo it inserts into the database, and the trigger fires, but it enters the number 29 times, so there are 29 records entered every time I read.


Can someone provide a tip as to what to look for, or how to go about writing a trigger that will only insert one row? I have tried UPDATE instead of INSERT, but this does nothing.
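

For comparison, here is a minimal sketch of a trigger that fires on INSERT (a new scan) and only looks at the row that fired it, via the NEW pseudo-row, so it inserts exactly once per read; the names follow the question, and it is untested against the real schema:



DROP TRIGGER IF EXISTS `test`;

CREATE TRIGGER `test` AFTER INSERT ON `readlog`
FOR EACH ROW
    INSERT INTO attendence (S_no)
    SELECT s.Stu_no
    FROM student AS s
    WHERE s.Stu_no = NEW.Cardid;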





Large ibdata file in MySQL


I have a MySQL database with all my tables as InnoDB and the file_per_table setting on. But I am still seeing a huge ibdata file (~50GB). All tables have a .ibd file as well. This machine was created by taking a dump and loading it. What is the reason for this?





Unique Index much bigger than the table itself


PostgreSQL 9.3 on Debian 7.


I have a table that is very large in row count, about 178 million rows (it will reach 250 million in a week and then grow by 500 thousand a day), but not that large in size, which is 20GB; it's a small varchar, some timestamps and integers. I need to prevent a group of 5 columns from being inserted repeatedly, so I created a unique index on them. The problem is that the index is now 34GB. My questions are: how can a unique index on 5 columns of a 9-column table be bigger than the full amount of data itself? Will this index ever be used, because I guess it's cheaper for the DBMS to scan the entire table instead? And is there any way to make this index smaller, or maybe another solution to implement a unique constraint?





How can I map a login to a database using T-SQL (not SSMS)


I'm writing a program that requires me to assign all permissions and everything in code. I have gotten stuck in this part:


[screenshot: SSMS Login Properties, User Mapping page]


I just want to do the equivalent of clicking the little box under "Map" for the msdb database and assigning that user to the SQLAgentUserRole role. I need a user to be able to add/edit SQL Server Agent jobs. I can get the settings right using SSMS, but I can't for the life of me figure out how to do it in raw SQL.


I've looked into ALTER LOGIN but I'm not seeing anything that does what I need. I suspect I just don't know the correct terms to Google. I don't normally do this sort of thing.
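

A sketch of the T-SQL equivalent of that mapping, with the login name as a placeholder (on versions before 2012 the last step would be sp_addrolemember instead of ALTER ROLE):



USE msdb;
CREATE USER [DOMAIN\SomeLogin] FOR LOGIN [DOMAIN\SomeLogin];
ALTER ROLE SQLAgentUserRole ADD MEMBER [DOMAIN\SomeLogin];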


Any help is much appreciated!





Restore Database SET SINGLE_USER


I am in the process of writing a restore database script. I want to be able to connect as sa and run the restore script.


When I run SET SINGLE_USER, does this mean that my login is the only user with access to the database? Or how is this determined? I want to use this because I get an error saying other connections are active.


Below is my restore script to date. Are there any improvements I can make to this?



ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

ALTER DATABASE AdventureWorks SET READ_ONLY;
GO

RESTORE DATABASE AdventureWorks FROM DISK = 'C:\AdventureWorks.BAK'
WITH
MOVE N'AdventureWorks_dat' to N'H:\Data\AdventureWorks.mdf',
MOVE N'AdventureWorks_log' to N'E:\Logs\AdventureWorks.ldf',
STATS = 5;

GO

ALTER DATABASE AdventureWorks SET MULTI_USER;
GO

ALTER DATABASE AdventureWorks SET READ_WRITE WITH NO_WAIT;
GO




Slave executing queries with different charset


I need to change a database charset because of this issue, and I solved my issue in the staging environment with the SQL command:



ALTER TABLE `articles` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;


This way I get the content converted to utf8, and I can now insert utf8 strings into those tables. Now I need to apply that fix to the production databases, which have a master-slave setup.


I created an unused latin1 table to test the command before doing that on real data. In a replicated database I added a table and some rows:



-- On master host
CREATE TABLE `test_table` (
`content` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- Insert a utf8 only chars, it produces 'H?rvatistan' initially
INSERT INTO `test_table` (`content`) VALUES ('Hırvatistan');
-- Change table charset
ALTER TABLE `test_table` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
-- Insert again the same value in master, this time I get the right value saved
INSERT INTO `test_table` (`content`) VALUES ('Hırvatistan');
-- On master I get
SELECT content, HEX(content), HEX('Hırvatistan') FROM test_table;
+-------------+--------------------------+--------------------------+
| content | HEX(content) | HEX('Hırvatistan') |
+-------------+--------------------------+--------------------------+
| H?rvatistan | 483F72766174697374616E | 48C4B172766174697374616E |
+-------------+--------------------------+--------------------------+
| Hırvatistan | 48C4B172766174697374616E | 48C4B172766174697374616E |
+-------------+--------------------------+--------------------------+


So everything worked as expected. However, when I go to the slave host I find a strange thing. All these commands have been executed (all of them are present in the master binlog) and the table is now utf8 on the slave host as well, but the last query returns a different result set on the slave host:



SELECT content, HEX(content), HEX('Hırvatistan') FROM test_table;
+-------------+------------------------+--------------------------+
| content | HEX(content) | HEX('Hırvatistan') |
+-------------+------------------------+--------------------------+
| H?rvatistan | 483F72766174697374616E | 48C4B172766174697374616E |
+-------------+------------------------+--------------------------+
| H?rvatistan | 483F72766174697374616E | 48C4B172766174697374616E |
+-------------+------------------------+--------------------------+


How is that possible?





How to list files in oracle backupset


I have a backup set created with RMAN, and I want to see what this set contains. How do I do that?
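

A few RMAN commands that list backup contents are sketched below; the backup set key (123 here) is a placeholder taken from the summary listing:



LIST BACKUP SUMMARY;
LIST BACKUPSET 123;
LIST BACKUP OF DATABASE;
LIST BACKUP OF ARCHIVELOG ALL;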





Toad Error using Proxy Connection


I created a login for a user two days ago on an Oracle database and altered another user so this new user could make a proxy connection as that user. Everything worked fine that day, and the user was able to connect from Toad using username[proxy_as_user]. Today this same connection gives an error of ORA-6413: Connection not open. The user can connect as themselves (not proxying) without issue from Toad. They can also connect from SQL*Plus using the proxy connection without issues, so we know the DB is fine and their Oracle client is fine.


Toad Version: 10.6.1.3


Oracle Client: 11.2.0


Oracle Database: 11.2.0.4


I tried turning on sqlnet tracing, but it doesn't seem to log anything for these bad connections.





Script to drop all databases except one not working


I'm trying to drop all databases but one on my test server. For this I'm using the script below, but it gives errors:



EXEC sp_MSforeachdb
'
USE [?]
IF (DB_ID(''?'') > 4 AND DB_NAME()!=''ABC'')
BEGIN
ALTER DATABASE ''?'' SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE ''?''
END
'


Here are the error messages:


Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'master'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'IMMEDIATE'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'tempdb'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'IMMEDIATE'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'model'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'IMMEDIATE'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'msdb'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'IMMEDIATE'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'ABC'.

Msg 102, Level 15, State 1, Line 5

Incorrect syntax near 'IMMEDIATE'.


What should I correct here?

Thank you
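
For reference, a minimal corrected sketch (assuming the goal is just to skip the four system databases and ABC): use bracket-delimited identifiers instead of quoted names, and drop the USE [?] so the database being dropped is never the current one.


EXEC sp_MSforeachdb '
IF DB_ID(''?'') > 4 AND ''?'' <> ''ABC''
BEGIN
    ALTER DATABASE [?] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE [?];
END
'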





CXPACKET SUSPENDED and NULL wait type


I was running the query in this article:


http://ift.tt/1D22xjz


to see what my threads were waiting on with regard to a suspended query with a wait type of CXPACKET.


However, for the SPID in question, the threads that were running showed a wait type of NULL, with every other thread in a SUSPENDED state with a wait type of CXPACKET.


I was expecting one of the threads to have some kind of wait type other than CXPACKET. Can anyone explain to me what is happening in this situation?
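
The article sits behind a link shortener, so for reference, a query along these lines (session id 53 is a placeholder) lists the wait type of each worker thread for the SPID; threads that are actively running do not appear in this DMV at all:


SELECT wt.session_id,
       wt.exec_context_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.session_id = 53
ORDER BY wt.exec_context_id;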


Thanks





Counting daily distinct records


I have a table that looks like this in Google Big Query:



custID  InteractionDate  Purchased?  Sales
1       20150312         F           0
1       20150312         T           200
1       20150314         T           150
2       20150221         T           400
2       20150221         F           0
2       20150221         T           120


..that I need to transform to the following structure:



custID  InteractionDate  Success  Sales
1       20150312         0.5      200
1       20150314         1        150
2       20150221         0.66     520


..where in this second table the success column is defined as



count(purchase=Yes)/(count(purchase=Yes) + count(purchase=No)),


and sales is just the sum.


I'm quite new to BQ and these SQL-like languages. I simply don't know how to do this; I've searched and seen examples that suggest using PARTITION OVER, but I haven't seen an example that does just this. Thanks a lot in advance.
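
Not knowing the exact table name or how the purchase flag is stored, a plain GROUP BY sketch along these lines (the table name, the column name Purchased, and the 'T'/'F' literals are all assumptions) would produce that shape without any window functions:


SELECT
  custID,
  InteractionDate,
  SUM(CASE WHEN Purchased = 'T' THEN 1 ELSE 0 END) / COUNT(*) AS Success,
  SUM(Sales) AS Sales
FROM mytable
GROUP BY custID, InteractionDate
ORDER BY custID, InteractionDate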





Timeout occurred while waiting for latch: class 'ACCESS_METHODS_HOBT_COUNT'


We have a big reporting database on production server, and we have several SSIS jobs running against it.


Yesterday almost all of the SSIS jobs related to this database got stuck and kept running forever. The transaction log backup of this database also never succeeded. I checked the SQL Server error log and I see lots of the following errors:



Timeout occurred while waiting for latch: class 'ACCESS_METHODS_HOBT_COUNT', id 00000000EA7449F0, type 2, Task 0x0000000005817DC8 : 12, waittime 66300, flags 0x1018, owning task 0x0000000000000000. Continuing to wait.


Does anyone know what this means, and how we can solve it?
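
Not an answer, but as a first data point it may help to see how often this latch class is being waited on overall; a sketch against the latch-stats DMV:


SELECT latch_class, waiting_requests_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class = 'ACCESS_METHODS_HOBT_COUNT';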





Is there a way to improve this query further?


I've got a query that over time I've been improving. Originally it simply selected * from the records table, and the data was formatted on the webapp side. But that was slow and I wanted to make things faster. I found that I could format the output into the correct json in postgres very quickly but the query was UGLY as sin. Then postgres 9.4 came out and brought json_object_agg which made the query nice and simple, and even faster than before, and all I had to do was change the data type of the data column to be json (it had been storing json-formatted data anyway).


The current version of the query is: select json_object_agg(day,data) as jsobj from records


The records table is:



\d records
                             Table "public.records"
 Column |  Type   |                       Modifiers
--------+---------+------------------------------------------------------
 id     | integer | not null default nextval('records_id_seq'::regclass)
 day    | date    |
 data   | json    |
Indexes:
    "records_pkey" PRIMARY KEY, btree (id)
    "records_day_key" UNIQUE CONSTRAINT, btree (day)


What I'm wondering is this: is there a way to improve the query further? Either making it more readable or more performant?





SELECT COUNT(*) expensive query mysql?


I have following query:



SELECT COUNT(*) AS `numrows`
FROM (`phppos_items`)
WHERE `deleted` = 0


explain:



mysql> explain SELECT COUNT(*) AS `numrows`
-> FROM (`phppos_items`)
-> WHERE `deleted` = 0;
+----+-------------+--------------+------+---------------+---------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------------+------+---------------+---------+---------+-------+------+-------------+
| 1 | SIMPLE | phppos_items | ref | deleted | deleted | 4 | const | 1 | Using index |
+----+-------------+--------------+------+---------------+---------+---------+-------+------+-------------+


Will this query always be indexed AND fast as the rows in the table grow?





Installed Percona-Toolkit, but no sign of xtrabackup


I've installed Percona Toolkit on my Debian (Wheezy) test server, using



dpkg -i percona-toolkit.deb
apt-get install --fix-missing -f
dpkg -i percona-toolkit.deb


However, there is no sign of Innobackupex in the /usr/bin folder. Trying to run it simply gives me Command Not Found


I can see all the pt-xxx applications in there suggesting the installation worked ok.


I then tried



apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A


as per the documentation, but that resulted in:



. . .
gpgkeys: key 1C4CBDCDCD2EFD2A not found on keyserver
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0


Followed by



apt-get install xtrabackup


But that gave me



Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package xtrabackup


How can I get xtrabackup (Innobackupex) installed as well?
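
For what it's worth, xtrabackup is not part of the percona-toolkit package; it ships separately (on Debian the package is usually named percona-xtrabackup), so once the Percona apt repository and its key are set up, something along these lines should pull it in:


apt-get update
apt-get install percona-xtrabackup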





How to resolve RESOURCE_SEMAPHORE and RESOURCE_SEMAPHORE_QUERY_COMPILE wait types


We are trying to figure out the root cause of slow-running SQL Server queries hitting/fetching data from one of the databases (size 300 GB), hosted on a server with the configuration below:



Windows Server 2003 R2, SP2, Enterprise Edition, 16 GB RAM, 12 CPUs, 32-bit


SQL Server 2005, SP4, Enterprise Edition, 32-bit.



We have already informed the business about the upgrade to 64-bit, which will take over a month.


But for the current issue, we are trying to gather data to see whether we can resolve the memory pressure, or finally come to the conclusion that we need to increase RAM.


Action completed: re-indexing and statistics updates are in order for this DB.


As shown below, we have been noticing the semaphore wait type for the past 5 days; the check was run during the load hours:


waittype


Some info from the queries below: size of buffer = 137272



SELECT SUM(virtual_memory_committed_kb)
FROM sys.dm_os_memory_clerks
WHERE type='MEMORYCLERK_SQLBUFFERPOOL'


and semaphore memory = 644024 per the query below:



SELECT SUM(total_memory_kb)
FROM sys.dm_exec_query_resource_semaphores


Below is some more info gathered from the sys.dm_exec_query_resource_semaphores and sys.dm_exec_query_memory_grants DMVs:


dmvserror


So from the info gathered above, and per the sp_Blitz data, the resource semaphore seems to be the problem.


Is the memory ('target_memory_kb') assigned to the resource semaphore IDs too low compared to the 16 GB of RAM available?


Note: per analysis of an 8-hour run, 'target_memory_kb' is always under 1 GB, compared to the 16 GB available.


What could be the issue here, and how do we resolve it? Please suggest.
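
For reference, the per-semaphore figures being discussed can be pulled with a query along these lines (a sketch; the column list is taken from the documentation of this DMV as it existed in SQL Server 2005):


SELECT resource_semaphore_id,
       target_memory_kb,
       total_memory_kb,
       available_memory_kb,
       granted_memory_kb,
       grantee_count,
       waiter_count,
       timeout_error_count,
       forced_grant_count
FROM sys.dm_exec_query_resource_semaphores;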


Thanks





Upgrading a database from sql server 2005 to 2012 uncovered a syntax error


I recently upgraded a SQL server from 2005 to 2012.

During validation however, a bug was discovered.


A certain trigger was coded as follows:



CREATE TRIGGER [dbo].[trigger] on [dbo].[table]
FOR UPDATE, UPDATE
AS
UPDATE table
set datetime = GetDate()
FROM table2
WHERE table.id = table2.id
GO


I can safely execute this (oddly) on 2005.

However, on 2012 it throws (as I would expect) a syntax error:

Syntax error: Duplicate specification of the action "UPDATE" in the trigger declaration.


Why does this not throw a syntax error on 2005? Why does it seemingly work there? My google-fu on this has failed me.





Select range of non-identical date from different table


Alrighty, I'm having a problem I cannot for the life of me fix. This is a messy one.


I'm on MySQL 5 with InnoDB.


I have sensor data in multiple tables, all with an ID and a DATETIME field, like so (two tables shown, but there are more). The IDs and DATETIMEs are not identical between tables, as each sensor uploads at a different rate.


temp_data:



INT id
DATETIME ts
FLOAT temp
FLOAT humidity


power_data:



INT id
DATETIME ts
FLOAT lux
FLOAT kW
FLOAT volts


My question is: I need to pull all rows within a certain date range from BOTH tables; in this case the last 12 hours (the most recent N rows), but in the future any date range in the past. The results will be sorted by the DATETIME value. I've tried every manner of JOIN I'm familiar with, but I always get an empty set or an ambiguous-column error.


Please help!
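
One approach that may fit (a sketch using only the columns shown above, not the asker's real schema): since the timestamps never line up, a JOIN is not really needed; UNION ALL the two tables, pad the missing columns with NULLs, and sort the combined set by the timestamp.


SELECT 'temp' AS source, ts, temp, humidity,
       NULL AS lux, NULL AS kW, NULL AS volts
FROM temp_data
WHERE ts >= NOW() - INTERVAL 12 HOUR
UNION ALL
SELECT 'power' AS source, ts, NULL, NULL,
       lux, kW, volts
FROM power_data
WHERE ts >= NOW() - INTERVAL 12 HOUR
ORDER BY ts;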





SQL select with unique_list - losing an item


I have an Oracle database and I have the following problem: if I make a select like this:


1*)



SELECT fk_proddes_id,fk_id_cod_deseu,
LISTAGG (TO_CHAR (cod_eliminare), ',')
WITHIN GROUP (ORDER BY cod_eliminare)
AS cod_eliminare
FROM sim_sd.proddes_ag_elim
GROUP BY fk_proddes_id, fk_id_cod_deseu


I get the following result:



FK_PRODDES_ID FK_ID_COD_DESEU COD_ELIMINARE
53 112 D 11,D 11,D 7
301 506 D 1,D 15


I need the unique code, so if I put:


2*)



SELECT fk_proddes_id,fk_id_cod_deseu,
unique_list ( LISTAGG (TO_CHAR (cod_eliminare), ',')
WITHIN GROUP (ORDER BY cod_eliminare))
AS cod_eliminare
FROM sim_sd.proddes_ag_elim
GROUP BY fk_proddes_id, fk_id_cod_deseu


the result is:



FK_PRODDES_ID FK_ID_COD_DESEU COD_ELIMINARE
53 112 D 11,D 7
301 506 D 15


so, I "loose the D1 - don't know why... If I put


3*)



SELECT fk_proddes_id,fk_id_cod_deseu,
unique_list ( LISTAGG (TO_CHAR (cod_eliminare), ',')
WITHIN GROUP (ORDER BY cod_eliminare DESC))
AS cod_eliminare
FROM sim_sd.proddes_ag_elim
GROUP BY fk_proddes_id, fk_id_cod_deseu


the result is:



FK_PRODDES_ID FK_ID_COD_DESEU COD_ELIMINARE
53 112 D 7,D 11
301 506 D 15,D 1


But I need D1,D15 and D11,D7. What's wrong with 2*)? Why do I "lose" D1?


Thanks!
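
A workaround worth trying (a sketch that sidesteps unique_list, which appears to be a custom function, rather than explaining its behaviour): deduplicate in a subquery before aggregating, so LISTAGG only ever sees distinct codes.


SELECT fk_proddes_id, fk_id_cod_deseu,
       LISTAGG(TO_CHAR(cod_eliminare), ',')
         WITHIN GROUP (ORDER BY cod_eliminare) AS cod_eliminare
FROM  (SELECT DISTINCT fk_proddes_id, fk_id_cod_deseu, cod_eliminare
       FROM sim_sd.proddes_ag_elim)
GROUP BY fk_proddes_id, fk_id_cod_deseu;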





How to prevent any update to my database before an exp operation?


I want to migrate my database from AIX to Red Hat via exp/imp.


How do I prevent clients from making any updates to my database during the migration process?
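
One possible approach (a sketch, assuming a brief outage window is acceptable): put the instance in restricted session while exp runs, so only users with the RESTRICTED SESSION privilege can connect. Note that sessions that are already connected are not disconnected automatically and would need to be killed separately.


-- Run as a DBA before starting exp
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- ... run exp here ...

-- Re-open the database to normal users afterwards
ALTER SYSTEM DISABLE RESTRICTED SESSION;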





migrating from Postgres to MonetDB [on hold]


I have a Postgres database (24 schemas, about 50 tables per schema, with partitioned tables) and I want to migrate it to a MonetDB database, but I don't know the best way to do this. How can I restore the data? I created the same tables with DDL statements ("create table myshema.mytable(....") and a dump:



pg_dump -t'myshema.mytable' -a mydatabase > myshema.mytable.dump


But MonetDB raises errors (first on the comments: syntax error, unexpected IDENT, expecting BINARY or INTO in: "--", then syntax error, unexpected IDENT, expecting BINARY or INTO in: ""). How should I dump the data? How do I restore it? Is it realistic to migrate from Postgres to MonetDB?
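
One workflow that might work (a sketch, untested here; file paths and database names are placeholders): export plain CSV from Postgres instead of a pg_dump, then load it with MonetDB's COPY INTO.


# Export one table as CSV from Postgres
psql -d mydatabase -c "\copy myshema.mytable TO '/tmp/mytable.csv' WITH (FORMAT csv)"

# Load the CSV into the matching MonetDB table
mclient -d mymonetdb -s "COPY INTO myshema.mytable FROM '/tmp/mytable.csv' USING DELIMITERS ',','\n','\"'"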





How do we resolve a MySQL Enterprise online backup error with "CORRUPT LOG RECORD FOUND" in the log?


I have configured my production MySQL database with the MySQL Enterprise "online backup" option. The configuration went through successfully and everything is OK with the MySQL tables, because this is my production database. I have 44 tables in the database; most of them use the InnoDB engine and some use MyISAM. When the assigned scheduler tries to take a backup, I get the log error below from the MySQL database.


MySQL Enterprise Backup version 3.11.1 Windows-6.0-x86 [Tue 11/04/2014 ] Copyright (c) 2003, 2014, Oracle and/or its affiliates. All Rights Reserved.


mysqlbackup: INFO: Starting with following command line ... C:\MySQL Backup\meb-3.11.1-windows-x86-32bit\meb-3.11.1-windows-x86-32bit\bin\mysqlbackup.exe --defaults-file=C:\MySQL Backup\80db2ec0-a3df-11e4-bceb-4c72b9719f9c.cnf --backup-dir=C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00 --show-progress=stdout backup


mysqlbackup: INFO: mysqlbackup: INFO: MySQL server version is '5.7.5-m15-log'. mysqlbackup: INFO: Got some server configuration information from running server.


IMPORTANT: Please check that mysqlbackup run completes successfully. At the end of a successful 'backup' run mysqlbackup prints "mysqlbackup completed OK!".


150124 21:15:01 mysqlbackup: INFO: MEB logfile created at C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00\meta\MEB_2015-01-24.21-15-01_backup.log




Server Repository Options:


datadir = C:\ProgramData\MySQL\MySQL Server 5.7\Data\
innodb_data_home_dir =
innodb_data_file_path = ibdata1:12M:autoextend
innodb_log_group_home_dir = C:\ProgramData\MySQL\MySQL Server 5.7\Data\
innodb_log_files_in_group = 2
innodb_log_file_size = 50331648
innodb_page_size = 16384
innodb_checksum_algorithm = crc32
innodb_undo_directory = C:\ProgramData\MySQL\MySQL Server 5.7\Data\
innodb_undo_tablespaces = 0
innodb_undo_logs = 128




Backup Config Options:


datadir = C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00\datadir
innodb_data_home_dir = C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00\datadir
innodb_data_file_path = ibdata1:12M:autoextend
innodb_log_group_home_dir = C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00\datadir
innodb_log_files_in_group = 2
innodb_log_file_size = 50331648
innodb_page_size = 16384
innodb_checksum_algorithm = crc32
innodb_undo_directory = C:\MySQL Backup\Fullbackup\2015-01-24_21-15-00\datadir
innodb_undo_tablespaces = 0
innodb_undo_logs = 128


mysqlbackup: INFO: Unique generated backup id for this is 14221143009241185


mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
150124 21:15:03 mysqlbackup: INFO: Full Backup operation starts with following threads 1 read-threads 6 process-threads 1 write-threads
150124 21:15:03 mysqlbackup: INFO: System tablespace file format is Antelope.
150124 21:15:03 mysqlbackup: INFO: Starting to copy all innodb files...
mysqlbackup: INFO: Could not find binlog index file. binlogs will not be copied for this backup. Point-In-Time-Recovery will not be possible. If this is online backup then server may not have started with --log-bin. You may specify its location with --log-bin-index option.
Progress: 0 of 111 MB; state: Copying system tablespace
150124 21:15:04 mysqlbackup: INFO: Found checkpoint at lsn 33399309.
150124 21:15:04 mysqlbackup: INFO: Starting log scan from lsn 33399296.
150124 21:15:04 mysqlbackup: INFO: Copying log...
150124 21:15:04 mysqlbackup: INFO: Copying C:\ProgramData\MySQL\MySQL Server 5.7\Data\ibdata1 (Antelope file format).
InnoDB: ############### CORRUPT LOG RECORD FOUND
InnoDB: Log record type 56, space id 0, page number 0
InnoDB: Log parsing proceeded successfully up to 33399309
InnoDB: Previous log record type 0, is multi 0
InnoDB: Recv offset 0, prev 0
InnoDB: Hex dump of corrupt log starting 100 bytes before the start
InnoDB: of the previous log rec,
InnoDB: and ending 100 bytes after the start of the corrupt rec:
len 200; hex 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000a0008b000000a011000000000000000000102000001020000de31c1800000004380000000001fda20d00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000; asc 8 ;
InnoDB: WARNING: the log file may have been corrupt and it
InnoDB: is possible that the log scan did not proceed
InnoDB: far enough in recovery! Please run CHECK TABLE
InnoDB: on your InnoDB tables to check that they are ok!
InnoDB: If mysqld crashes after this recovery, look at
InnoDB: http://ift.tt/1nYfkMj
InnoDB: about forcing recovery.
150124 21:15:04 mysqlbackup: ERROR: FOUND CORRUPT LOG: cannot continue!
Progress: 16 of 111 MB; state: Copying system tablespace


mysqlbackup failed with errors!


I am unable to find the exact error in my MySQL error log.





Compressing partition - what about index?


To save space on disk I want to compress some partitions of my tables:



alter table TEST_TAB move partition TEST_TAB_2014 compress for all operations;


I have also indexes on this partition.

Should I recreate indexes after compression, or is it not necessary?
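
For what it's worth, a quick way to check (a sketch; index and partition names are placeholders): moving a partition generally leaves the corresponding local index partitions UNUSABLE unless UPDATE INDEXES was specified, and the dictionary view below shows which ones need a rebuild.


-- Which index partitions need attention after the move?
SELECT index_name, partition_name, status
FROM   user_ind_partitions
WHERE  status = 'UNUSABLE';

-- Rebuild one of them
ALTER INDEX test_tab_idx REBUILD PARTITION test_tab_2014;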





Getting Machine name from where stored procedure was last executed


Is there any way to find out machine IP or machine name from where a particular stored procedure is executed in SQL Server 2005?


We have a client-server architecture with .NET as the front end and SQL Server 2005 as the back end. Someone in the production environment has executed a stored procedure and deleted critical data from the .NET application. We are unable to deduce who did this.


We know the exact time when this happened. Is there any way we can find all the connections to SQL Server at that time, so that we can find the machine name from the logged connections?
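
For the "right now" part of this, current connections (not historical ones) can be listed with a query along these lines; it will not show who ran the procedure in the past, but it illustrates the host-name information SQL Server keeps per session:


SELECT session_id, login_name, host_name, program_name, login_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;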





Best practises for DB management


What are the prevalent best practices for managing the database of an application?


One option is to have an SQL script that creates the empty tables and also seeds the hardcoded options (say, all locations or profession types). During an upgrade you know what's in there and what's not, so on the server you run an SQL script that adds the new columns and any new data.


Another is to do this programmatically: every time the server runs, it checks whether the DB exists and, if not, creates it. When new columns are added, there would be code that checks whether a certain column exists in the schema and, if not, adds it with a default value of my choice, along with any new data that is needed.


My constraints are: 1) I should be blindly able to restore a backup if anything goes wrong; 2) minimal manual intervention.


Am I missing things here that I should add to the constraints? Please clarify your reasoning in relation to the answer.


Also, I hope that since I'm asking for a constrained answer along with sources and reasoning, this won't count as an opinion-based question.





How to handle too many inserts?


I have a database which is accessed from remote locations around the world with very long delays and poor transfer speeds. I currently transfer the data inside a transaction, since either all or none of the data should be inserted. Currently the values are inserted one at a time, and it takes a total of about 37 seconds for 200 inserts before the commit.


The problem is that there may be 10 sites continuously trying to insert data at a 2-minute interval. If they fail, they will retry until they succeed. This adds up to an impossible situation, as they will start growing a backlog.


I understand I can use the snapshot isolation level to allow reads in parallel. But how should I handle the parallel inserts? How do banks and other systems do it, since they should also build up a backlog if they process everything in a queue?
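
One thing that may shrink the 37 seconds dramatically (a sketch; the table and columns are made up for illustration): send the 200 rows as a single multi-row INSERT inside the transaction instead of 200 separate statements, so only one network round trip is paid per batch.


BEGIN TRANSACTION;

INSERT INTO readings (site_id, read_at, value) VALUES
    (1, '2015-03-27 10:00:00', 42.0),
    (1, '2015-03-27 10:00:10', 42.5),
    (1, '2015-03-27 10:00:20', 43.1);  -- ... and so on for all 200 rows

COMMIT;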





MySQL Workbench: How to detach reverse-engineered EER diagram from existing database?


MySQL Workbench provides the option to 'Create EER Model From Existing Database'. Trying to remove a table from the resulting diagram leads to a dialog asking:



Should corresponding database objects be deleted with the figures?



As the database must be left untouched in any case, my question is now: How to safely detach the EER model / diagram from the underlying database?





How effective is creating 2 databases with the same content?


I'm in a dilemma over whether creating a redundant database will speed up access, considering only the CRUD factor.


Is it effective to create a dedicated DB for creating/updating/deleting records and a separate DB for viewing only?


Currently using MySQL.





How to Insert geom line in geom geometry column?


I am a beginner and I have the coordinates of a lot of points in a table. In pgAdmin I want to make a line that passes through these points. I already have the coordinate points in my table, and I want to build a line from those points and insert it into the geom geometry column of the same table.


enter image description here

Description of the table: ID, X, Y, description column, length, description column. I want to make the line that passes through these points.


I have done this query:



SELECT AddGeometryColumn('MyTable', 'geom', 32634, 'LINESTRING', 2);
UPDATE "MyTable"
SET geom = ST_SetSRID(ST_MakeLINE((ST_MakePOINT1("X", "Y"), ST_MakePOINT2("X", "Y")), 32634));


but it doesn't work. Can anyone help me please?
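
A sketch that might be closer to what is needed (assuming the line should connect all of the points ordered by "ID"; ST_MakeLine is an aggregate function, so there is no ST_MakePOINT1/ST_MakePOINT2): build the line once from every row, for example to insert into a separate line table, rather than updating every point row with the same geometry.


-- Collect every point, ordered by "ID", into one LINESTRING
SELECT ST_SetSRID(
         ST_MakeLine(ST_MakePoint("X", "Y") ORDER BY "ID"),
         32634) AS geom
FROM "MyTable";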





Order field values in round-robin in MySQL


I have a table like this:



id order_number
1 20
2 20
3 20
4 21
5 21
6 21
7 21
8 21
9 21
10 22
11 23
12 23
13 24


I need to arrange the rows so that the order_numbers are in a round-robin sequence (20,21,22,23,24, then again 20,21,22,23,24, then 20 again, etc.), i.e.:



id order_number
1 20
4 21
10 22
11 23
13 24
2 20
5 21
12 23
3 20
6 21
7 21
8 21
9 21


Does anyone know how to write the query for this? Thanks in advance!
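
One possible way (a sketch; MySQL 5 has no ROW_NUMBER, so session variables emulate a per-group counter, and the table name mytable is a placeholder): number each row within its order_number group, then sort by that counter first.


SELECT id, order_number
FROM (
    -- rn = 1 for the first row of each order_number, 2 for the second, ...
    -- relies on the common (if officially undefined) user-variable trick
    SELECT id, order_number,
           @rn := IF(@prev = order_number, @rn + 1, 1) AS rn,
           @prev := order_number AS prev
    FROM mytable, (SELECT @rn := 0, @prev := NULL) AS init
    ORDER BY order_number, id
) AS t
ORDER BY rn, order_number, id;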





Registered Server window does not show Database Engine or Local Server Groups


I am running SQL Server Management Studio 2012 on a Windows 7 machine. After a failed attempt to import my registered servers from SSMS 2008, the icons for Database Engine and Local Server Groups are not displayed in the Registered Servers window: the Registered Servers window is empty.


enter image description here


As such, I cannot register a server from the SSMS GUI by right-clicking Local Server Groups.


How can I restore the Database Engine and Local Server Groups items?





How can I decide Rebuilding Index with DETAILED or SAMPLED on MSSQL


I want to rebuild my indexes and update the statistics on the DB.


How can I choose between these options for production maintenance?


SQL Server Version : Microsoft SQL Server 2012 - 11.0.2100.60 (X64) (Build 7601: Service Pack 1)
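
If the question is about the scan level of sys.dm_db_index_physical_stats (where LIMITED, SAMPLED and DETAILED are the three modes), a sketch like this shows what a SAMPLED scan returns; DETAILED reads every page and is correspondingly slower, so it tends to be reserved for smaller or critical indexes. The database name is a placeholder.


SELECT object_id, index_id, partition_number,
       avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDatabase'), NULL, NULL, NULL, 'SAMPLED');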





Why is MySQL not saving inserted data?


So I have the following insert statement that I run through MySQL Workbench:



INSERT INTO PeakOnlineCount
(time_stamp, LocationId, Count)
values
('2015-03-26', 0, 1)


If I run the following query:



SELECT * FROM PeakOnlineCount


straight after in the same tab, I can see the results. If I disconnect, reconnect, and run the select query, the table seems empty.


If I run the insert statement on one computer and run the select from another, the table is empty.


I'm not using transactions or anything like that. What's going on?


(And yes I know that I'm using SELECT *, this is only for the question)


EDIT: Here's the structure of the table:



CREATE TABLE `PeakOnlineCount` (
`time_stamp` date NOT NULL,
`LocationId` int(11) NOT NULL,
`Count` int(11) NOT NULL,
KEY `IX_LocationDate` (`time_stamp`,`LocationId`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
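
One thing worth checking (a guess, not confirmed by the question): if autocommit is disabled for the Workbench connection, the INSERT stays inside an open transaction and is rolled back on disconnect, which would match these symptoms exactly.


-- Check whether the session is auto-committing
SELECT @@autocommit;

-- If it returns 0, either commit explicitly after the INSERT ...
COMMIT;

-- ... or turn autocommit back on for the session
SET autocommit = 1;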




Clean SQL Server database [on hold]


I have a database with one .mdf file of 3 GB and a log file. I moved two tables with 1.5 GB of data to a new filegroup, and still the .mdf file is 3 GB in size.


What should I do to decrease the size of the data (.mdf) file?
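
Assuming the goal is simply to release the space freed by the moved tables back to the operating system, a sketch along these lines would do it (the logical file name and target size are placeholders); note that shrinking fragments indexes and is usually treated as a one-off operation.


USE MyDatabase;
GO
DBCC SHRINKFILE (MyDatabase_Data, 1536);  -- target size in MB
GO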





LEFT JOIN DEFAULT NULL VALUE


I have a query, simplified like this:



SELECT a.id, a.name, b.* FROM a LEFT JOIN b ON a.id=b.id



Here b comes from a GROUP_CONCAT-built select (very dynamic columns). Sometimes there are 3 columns in b, sometimes 10, depending on the filtering.


What i want is something like this :



SELECT a.id, a.name, IFNULL(b.*) FROM a LEFT JOIN b ON a.id=b.id



I already looked here and here, and the answer is always to repeat IFNULL per column. But I can't apply IFNULL per column because I don't know what the columns are. Is there any other approach for replacing the NULL values from a LEFT JOIN?
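
Since the column list of b is only known at run time, one workaround (a sketch; it assumes b is an actual table, or can be materialized as one, and that 0 is the desired default) is to generate the IFNULL select list from information_schema and splice it into a prepared statement:


-- Builds a string such as: IFNULL(b.`col1`, 0) AS `col1`, IFNULL(b.`col2`, 0) AS `col2`, ...
-- Note: group_concat_max_len may need raising for wide tables.
SELECT GROUP_CONCAT(
         CONCAT('IFNULL(b.`', column_name, '`, 0) AS `', column_name, '`')
       ) AS select_list
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND table_name = 'b';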





on every row on SELECT Query must lookup to other table


I'm working on a login/logout system and I'm facing a problem with a SELECT query. I have 2 tables, namely dtr_tbl and shift_tbl. dtr_tbl holds the user's logs (login/logout time details) and shift_tbl holds the list of shifts available to the users.


>> dtr_tbl


enter image description here


>> shift_tbl (please ignore the dates in the shifting_timeFr and shifting_timeTo columns, just focus on the time)


enter image description here


What I want to do is compute the TotalHrs per day and per shifting_code (see my sample SQL below).



SELECT
user_id,
CONVERT(varchar,log_date,101) AS [LogDate],
MIN(d.login_time) AS [Login],
MAX(d.logout_time) AS [Logout],
d.shift_code,
CASE WHEN MIN(d.login_time) <= -->> validate if `Login` time of user is valid within given `gracetime period`
CAST(
CAST(CONVERT(DATE, -->> extract `Date` only from user's `login` time.
DATEADD(
hh,
IIF(sh.shifting_LVtime=1,-15,0), -->> check if toggle for LVtime(just ignore this)
MIN(d.login_time) -->> current login time of user
)) AS DATETIME) +
CAST(CONVERT(TIME, -->> extract `Time` only from table `shift_tbl` on `shifting_timeFr` column
DATEADD( -->> add `15 minutes` for given `gracetime` period
MINUTE,
sh.shifting_gracetime, -->> 15 mins given `gracetime` period from shift_tbl
sh.shifting_timefr --> from `shift_tbl`
)) AS DATETIME)
AS DATETIME) THEN
DATEDIFF(MINUTE,
CAST(CONVERT(
DATE,
MIN(d.login_time)
) AS DATETIME) +
CAST(CONVERT(
TIME,
sh.shifting_timefr
) AS DATETIME)
,MAX(d.logout_time))/60.0
ELSE
DATEDIFF(MINUTE,MIN(d.login_time),
MAX(d.logout_time))/60.0
END AS [TotalHrs]
FROM tblDTR d
INNER JOIN tblShiftings sh
ON d.shift_code = sh.shifting_code
WHERE user_id = 'USER1'
AND d.log_date
BETWEEN CAST('03/13/2015' AS datetime)
AND CAST('03/23/2015' AS datetime)
GROUP BY
d.user_id,
d.shift_code,
sh.shifting_gracetime,
sh.shifting_timefr,
sh.shifting_timeto,
sh.shifting_LVtime,
CONVERT(varchar,d.log_date,101)
ORDER BY LogDate ASC


My sample code above is not working: it only works for shift_code DS01. I also want to compute the other shift_code values from dtr_tbl.


>> Result:


enter image description here


thanks.